files_that_may_need_integer_division
This is a list of the files, along with the specific line-by-line suggestions, where futurize suggested we use "old_div" to preserve the default integer-style division of Py2.7.x.
Importantly, "old_div" doesn't work under my standard Anaconda install, because (according to Google) I need to do an extra installation step. That is highly undesirable for HARK -- we shouldn't ask users to manually install a special package just so that HARK can use the old-style Py2 division in all of the files listed in the "Summary list" section below.
Instead, I have recorded (in the appropriate sections below) every instance in which futurize recommended using old_div in these files: a summary list of the files, and, in the last section of this document, the full futurize output with the suggested "old_div" replacements.
I then manually added "from __future__ import division" to every file for which futurize recommended using old_div. I then searched for common ways that integers are created and named in our files, checking every instance of the following items: "size", "count", "len(", and ".shape". After that I spot-checked a number of additional random instances of suggested "old_div" usage, to learn whether there were any systematic ways in which we relied on old-style Py2 implicit integer division; aside from the items identified in the next section, I did not find any.
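For reference, that pattern search can be approximated mechanically. Here is a rough sketch (my own throwaway helper, not part of HARK) of the kind of scan I mean:

```python
# Sketch only: flag lines where "/" appears near names that usually hold
# integers, so each hit can be checked by hand for implicit int division.
import os
import re

int_hints = re.compile(r'size|count|len\(|\.shape')

for root, _, files in os.walk('./HARK'):
    for fname in files:
        if fname.endswith('.py'):
            path = os.path.join(root, fname)
            with open(path) as f:
                for lineno, line in enumerate(f, start=1):
                    if '/' in line and int_hints.search(line):
                        print('%s:%d: %s' % (path, lineno, line.rstrip()))
```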
Finally, because Python 3 is strict about not accepting floats where integers are required -- particularly when indexing lists or other array-likes -- any remaining reliance on implicit Python 2 integer division can be identified and addressed in a cross-compatible way. In particular, I use explicit integer division ("//") where it is clearly needed; the instances are outlined in the "Explicitly include //" section below.
The double-slash integer division operator should do exactly what the original Py2 implicit integer division did, and "//" works the same in both Py2 and Py3. However, I'd like other eyes on this to confirm.
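As a quick illustration of the semantics in question (a minimal sketch, not HARK code):

```python
from __future__ import division  # Py2: make "/" behave like Py3 true division

# Py2 without the import: 7/2 == 3 (implicit floor division on ints).
# With the import, and in all of Py3, "/" is true division:
assert 7 / 2 == 3.5

# "//" is explicit floor division and is identical in Py2 and Py3,
# reproducing what implicit Py2 integer division used to do:
assert 7 // 2 == 3

# Py3 is strict about float indices, so any missed case fails loudly:
x = [10, 20, 30, 40]
assert x[len(x) // 2] == 30  # fine
# x[len(x) / 2]  # TypeError in Py3: list indices must be integers
```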
-
Explicitly include "//"
For ./HARK/cstwMPC/cstwMPC.py -- the following integer division needs to be confirmed as correct:
- This line: replication_factor = len(self.agents)/param_count
- replaced with this line: replication_factor = len(self.agents) // param_count
-
For ./HARK/cAndCwithStickyE/StickyE_NO_MARKOV.py -- the following integer division needs to be confirmed as correct:
- This line: interval_count = (total_periods-ignore_periods) / interval_size # Number of intervals in the macro regressions
- replaced with this line: interval_count = (total_periods-ignore_periods) // interval_size # Number of intervals in the macro regressions
-
For ./HARK/cAndCwithStickyE/StickyE_MAIN.py -- the following integer division needs to be confirmed as correct:
- This line: interval_count = (total_periods-ignore_periods)/interval_size # Number of intervals in the macro regressions
- replaced with this line: interval_count = (total_periods-ignore_periods) // interval_size # Number of intervals in the macro regressions
-
For ./HARK/cAndCwithStickyE/StickyEtools.py -- the following integer division needs to be confirmed as correct:
- This line: N = DeltaLogC.size/interval_size
- replaced with this line: N = DeltaLogC.size // interval_size
-
For ./HARK/cAndCwithStickyE/StickyEparams.py -- the following integer division needs to be confirmed as correct:
- These lines:
  - 'AgentCount': AgentCount/TypeCount, # Spread agents evenly among types
  - init_SOE_mrkv_market['MrkvNow_init'] = StateCount/2
  - init_RA_mrkv_consumer['MrkvNow'] = [StateCount/2]
- replaced with these lines, respectively (see the sketch below for why these must stay integers):
  - 'AgentCount': AgentCount // TypeCount, # Spread agents evenly among types
  - init_SOE_mrkv_market['MrkvNow_init'] = StateCount // 2
  - init_RA_mrkv_consumer['MrkvNow'] = [StateCount // 2]
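These three matter because the results are used as an agent count and as indices into the discrete Markov state. A minimal sketch of the failure mode (the StateCount value here is made up purely for illustration):

```python
# Hypothetical values, chosen only to illustrate the indexing requirement.
StateCount = 11
state_values = list(range(StateCount))

middle = StateCount // 2           # == 5, an int, in both Py2 and Py3
assert state_values[middle] == 5   # indexing works

# With true division the result is the float 5.5, and Py3 indexing fails:
# state_values[StateCount / 2]  -> TypeError: list indices must be integers
```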
-
Summary list
These are all the files for which futurize recommended using "old_div", and to which I manually added "from __future__ import division" to enforce float division unless explicitly using "//" for integer division.
Note that this list includes all files listed in the "Explicitly include //" section above, even if those changes addressed all instances of "old_div".
- ./HARK/cAndCwithStickyE/StickyEparams.py
- ./HARK/cAndCwithStickyE/StickyE_MAIN.py
- ./HARK/cAndCwithStickyE/StickyE_NO_MARKOV.py
- ./HARK/cAndCwithStickyE/StickyEmodel.py
- ./HARK/cstwMPC/MakeCSTWfigsForSlides.py
- ./HARK/cstwMPC/SetupParamsCSTW.py
- ./HARK/cstwMPC/cstwMPC.py
- ./HARK/FashionVictim/FashionVictimModel.py
- ./HARK/ConsumptionSaving/Demos/Chinese_Growth.py
- ./HARK/ConsumptionSaving/Demos/MPC_credit_vs_MPC_income.py
- ./HARK/ConsumptionSaving/Demos/Fagereng_demo.py
- ./HARK/ConsumptionSaving/ConsMedModel.py
- ./HARK/ConsumptionSaving/ConsPrefShockModel.py
- ./HARK/ConsumptionSaving/RepAgentModel.py
- ./HARK/ConsumptionSaving/ConsGenIncProcessModel.py
- ./HARK/ConsumptionSaving/ConsRepAgentModel.py
- ./HARK/ConsumptionSaving/ConsMarkovModel.py
- ./HARK/ConsumptionSaving/TractableBufferStockModel.py
- ./HARK/ConsumptionSaving/ConsAggShockModel.py
- ./HARK/ConsumptionSaving/ConsIndShockModel.py
- ./HARK/interpolation.py
- ./HARK/parallel.py
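As a cross-check, one can verify mechanically that every file in this list actually carries the import. A rough sketch (my own helper, not part of HARK):

```python
# Sketch only: confirm each listed file contains the __future__ import.
files_to_check = [
    './HARK/cAndCwithStickyE/StickyEparams.py',
    './HARK/cstwMPC/cstwMPC.py',
    # ...the rest of the summary list above
]

for path in files_to_check:
    with open(path) as f:
        src = f.read()
    assert 'from __future__ import division' in src, path
print('All listed files enforce float division.')
```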
-
Details: the full futurize output, with the suggested "old_div" replacements, for each file:
```diff
--- ./HARK/cAndCwithStickyE/StickyEparams.py (original)
+++ ./HARK/cAndCwithStickyE/StickyEparams.py (refactored)
@@ -12,7 +12,10 @@
 For the first four models (heterogeneous agents), it defines dictionaries for
 the Market instance as well as the consumers themselves. All parameters are quarterly.
 '''
+from __future__ import division
+
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from copy import copy
 from HARK.utilities import approxUniform
@@ -59,14 +62,14 @@
 DeprFac = 1. - DeprFacAnn**0.25        # Quarterly depreciation rate
-KSS = KtYratioSS = KYratioSS**(1./(1.-CapShare)) # Steady state Capital to labor productivity
+KSS = KtYratioSS = KYratioSS**(old_div(1.,(1.-CapShare))) # Steady state Capital to labor productivity
 wRteSS = (1.-CapShare)*KSS**CapShare   # Steady state wage rate
 rFreeSS = CapShare*KSS**(CapShare-1.)  # Steady state interest rate
 RfreeSS = 1. - DeprFac + rFreeSS       # Steady state return factor
 LivPrb = 1. - DiePrb                   # Quarterly survival probability
 DiscFacDSGE = RfreeSS**(-1)            # Discount factor, HA-DSGE and RA models
 TranShkVar = TranShkVarAnn*4.          # Variance of idiosyncratic transitory shocks
-PermShkVar = PermShkVarAnn/4.          # Variance of idiosyncratic permanent shocks
+PermShkVar = old_div(PermShkVarAnn,4.) # Variance of idiosyncratic permanent shocks
 #TempDstn = approxMeanOneLognormal(N=7,sigma=np.sqrt(PermShkVar))
 #DiscFacSOE = 0.99*LivPrb/(RfreeSS*np.dot(TempDstn[0],TempDstn[1]**(-CRRA))) # Discount factor, SOE model
@@ -110,7 +113,7 @@
 PolyMrkvArray[0,0] += 0.5*(1.0 - Persistence)
 PolyMrkvArray[StateCount-1,StateCount-1] += 0.5*(1.0 - Persistence)
 PolyMrkvArray *= 1.0 - RegimeChangePrb
-PolyMrkvArray += RegimeChangePrb/StateCount
+PolyMrkvArray += old_div(RegimeChangePrb,StateCount)
 PermGroFacSet = np.exp(np.linspace(np.log(PermGroFacMin),np.log(PermGroFacMax),num=StateCount))
@@ -126,7 +129,7 @@
                       'DiscFac': DiscFacMeanSOE,
                       'LivPrb': [LivPrb],
                       'PermGroFac': [1.0],
-                      'AgentCount': AgentCount/TypeCount, # Spread agents evenly among types
+                      'AgentCount': old_div(AgentCount,TypeCount), # Spread agents evenly among types
                       'aXtraMin': 0.00001,
                       'aXtraMax': 40.0,
                       'aXtraNestFac': 3,
@@ -180,7 +183,7 @@
 init_SOE_mrkv_market['PermShkAggStd'] = StateCount*[init_SOE_market['PermShkAggStd']]
 init_SOE_mrkv_market['TranShkAggStd'] = StateCount*[init_SOE_market['TranShkAggStd']]
 init_SOE_mrkv_market['PermGroFacAgg'] = PermGroFacSet
-init_SOE_mrkv_market['MrkvNow_init'] = StateCount/2
+init_SOE_mrkv_market['MrkvNow_init'] = old_div(StateCount,2)
 init_SOE_mrkv_market['loops_max'] = 1
@@ -259,5 +262,5 @@
 init_RA_mrkv_consumer = copy(init_RA_consumer)
 init_RA_mrkv_consumer['MrkvArray'] = PolyMrkvArray
-init_RA_mrkv_consumer['MrkvNow'] = [StateCount/2]
+init_RA_mrkv_consumer['MrkvNow'] = [old_div(StateCount,2)]
 init_RA_mrkv_consumer['PermGroFac'] = [PermGroFacSet]
```
```diff
--- ./HARK/cAndCwithStickyE/StickyEtools.py (original)
+++ ./HARK/cAndCwithStickyE/StickyEtools.py (refactored)
@@ -1,7 +1,12 @@
 """
 This module holds some data tools used in the cAndCwithStickyE project.
 """
+from __future__ import division
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import os
 import csv
 import numpy as np
@@ -13,7 +18,7 @@
 import subprocess
 from HARK.utilities import CRRAutility
 from HARK.interpolation import LinearInterp
-from StickyEparams import results_dir, tables_dir, figures_dir, UpdatePrb, PermShkAggVar
+from .StickyEparams import results_dir, tables_dir, figures_dir, UpdatePrb, PermShkAggVar
 UpdatePrbBase = UpdatePrb
 PermShkAggVarBase = PermShkAggVar
@@ -84,11 +89,11 @@
     # PermShkAggHist needs to be shifted one period forward
     PlvlAgg_hist = np.cumprod(np.concatenate(([1.0],Economy.PermShkAggHist[:-1]),axis=0))
     AlvlAgg_hist = np.mean(aLvlAll_hist,axis=1) # Level of aggregate assets
-    AnrmAgg_hist = AlvlAgg_hist/PlvlAgg_hist    # Normalized level of aggregate assets
+    AnrmAgg_hist = old_div(AlvlAgg_hist,PlvlAgg_hist)    # Normalized level of aggregate assets
     ClvlAgg_hist = np.mean(cLvlAll_hist,axis=1) # Level of aggregate consumption
-    CnrmAgg_hist = ClvlAgg_hist/PlvlAgg_hist    # Normalized level of aggregate consumption
+    CnrmAgg_hist = old_div(ClvlAgg_hist,PlvlAgg_hist)    # Normalized level of aggregate consumption
     YlvlAgg_hist = np.mean(yLvlAll_hist,axis=1) # Level of aggregate income
-    YnrmAgg_hist = YlvlAgg_hist/PlvlAgg_hist    # Normalized level of aggregate income
+    YnrmAgg_hist = old_div(YlvlAgg_hist,PlvlAgg_hist)    # Normalized level of aggregate income
     if calc_micro_stats: # Only calculate stats if requested. This is a memory hog with many simulated periods
         micro_stat_periods = int((Economy.agents[0].T_sim-ignore_periods)*0.1)
@@ -114,17 +119,17 @@
         if ~hasattr(Economy,'Rfree'): # If this is a markov DSGE specification...
             # Find the expected interest rate - approximate by assuming growth = expected growth
             ExpectedGrowth_hist = Economy.PermGroFacAgg[Mrkv_hist]
-            ExpectedKLRatio_hist = AnrmAgg_hist/ExpectedGrowth_hist
+            ExpectedKLRatio_hist = old_div(AnrmAgg_hist,ExpectedGrowth_hist)
             ExpectedR_hist = Economy.Rfunc(ExpectedKLRatio_hist)
     else: # If this is a representative agent specification...
         PlvlAgg_hist = Economy.pLvlTrue_hist.flatten()
         ClvlAgg_hist = Economy.cLvlNow_hist.flatten()
-        CnrmAgg_hist = ClvlAgg_hist/PlvlAgg_hist.flatten()
+        CnrmAgg_hist = old_div(ClvlAgg_hist,PlvlAgg_hist.flatten())
         YnrmAgg_hist = Economy.yNrmTrue_hist.flatten()
         YlvlAgg_hist = YnrmAgg_hist*PlvlAgg_hist.flatten()
         AlvlAgg_hist = Economy.aLvlNow_hist.flatten()
-        AnrmAgg_hist = AlvlAgg_hist/PlvlAgg_hist.flatten()
+        AnrmAgg_hist = old_div(AlvlAgg_hist,PlvlAgg_hist.flatten())
         BigTheta_hist = Economy.TranShkNow_hist.flatten()
         if hasattr(Economy,'MrkvNow'):
             Mrkv_hist = Economy.MrkvNow_hist
@@ -256,7 +261,7 @@
             R[i] = float(all_data[j][11])
     # Determine how many subsample intervals to run (and initialize array of coefficients)
-    N = DeltaLogC.size/interval_size
+    N = old_div(DeltaLogC.size,interval_size)
     CoeffsArray = np.zeros((N,7)) # Order: DeltaLogC_OLS, DeltaLogC_IV, DeltaLogY_IV, A_OLS, DeltaLogC_HR, DeltaLogY_HR, A_HR
     StdErrArray = np.zeros((N,7)) # Same order as above
     RsqArray = np.zeros((N,5))
@@ -351,7 +356,7 @@
         InstrRsqVec[n] = res._results.rsquared_adj
-    t_stat_array = CoeffsArray/StdErrArray
+    t_stat_array = old_div(CoeffsArray,StdErrArray)
     C_successes_95 = np.sum(t_stat_array[:,4] > 1.96)
     Y_successes_95 = np.sum(t_stat_array[:,5] > 1.96)
@@ -486,7 +491,7 @@
             span = t1-t0
             j = MrkvHist[t0]
             # Calculate discounted flow of utility for this agent and store it
-            cVec = cLvlHist[t0:t1,i]/PlvlHist[t0]
+            cVec = old_div(cLvlHist[t0:t1,i],PlvlHist[t0])
             uVec = u(cVec)
             v = np.dot(DiscVec[:span],uVec)
             vArray[j,n[j]] = v
@@ -555,7 +560,7 @@
     else:
         sig_symb = '*'
     def sigFunc(coeff,stderr):
-        z_stat = np.abs(coeff/stderr)
+        z_stat = np.abs(old_div(coeff,stderr))
        cuts = np.array([1.645,1.96,2.576])
        N = np.sum(z_stat > cuts)
        if N > 0:
@@ -806,8 +811,8 @@
     vBirth_DSGE_S = np.genfromtxt(results_dir + four_in_files[3] + 'BirthValue.csv', delimiter=',')
     # Calculate the cost of stickiness in the SOE and DSGE models
-    StickyCost_SOE = np.mean(1. - (vBirth_SOE_S/vBirth_SOE_F)**(1./(1.-CRRA)))
-    StickyCost_DSGE = np.mean(1. - (vBirth_DSGE_S/vBirth_DSGE_F)**(1./(1.-CRRA)))
+    StickyCost_SOE = np.mean(1. - (old_div(vBirth_SOE_S,vBirth_SOE_F))**(old_div(1.,(1.-CRRA))))
+    StickyCost_DSGE = np.mean(1. - (old_div(vBirth_DSGE_S,vBirth_DSGE_F))**(old_div(1.,(1.-CRRA))))
     paper_top = "\\begin{minipage}{\\textwidth}\n"
     paper_top += "  \\begin{table} \n"
@@ -905,10 +910,10 @@
     c_matrix = deepcopy(agent.cLvlNow_hist[(ignore_periods+1):ignore_periods+num_periods+1,0:AgentCount])
     y_matrix = deepcopy(agent.yLvlNow_hist[(ignore_periods+1):ignore_periods+num_periods+1,0:AgentCount])
     total_trans_shk_matrix = deepcopy(agent.TranShkNow_hist[(ignore_periods+1):ignore_periods+num_periods+1,0:AgentCount])
-    trans_shk_matrix = total_trans_shk_matrix/(np.array(agg_trans_shk_matrix)*np.array(wRte_matrix))[:,None]
+    trans_shk_matrix = old_div(total_trans_shk_matrix,(np.array(agg_trans_shk_matrix)*np.array(wRte_matrix))[:,None])
     a_matrix = deepcopy(agent.aLvlNow_hist[(ignore_periods+1):ignore_periods+num_periods+1,0:AgentCount])
     pLvlTrue_matrix = deepcopy(agent.pLvlTrue_hist[(ignore_periods+1):ignore_periods+num_periods+1,0:AgentCount])
-    a_matrix_nrm = a_matrix/pLvlTrue_matrix
+    a_matrix_nrm = old_div(a_matrix,pLvlTrue_matrix)
     age_matrix = deepcopy(agent.t_age_hist[(ignore_periods+1):ignore_periods+num_periods+1,0:AgentCount])
@@ -1097,8 +1102,8 @@
     plt.close()
     # Plot uCost vs 1/Pi
-    plt.plot(1./UpdatePrbVec,uCostVec*10000,color='#1f77b4')
-    plt.plot([1./UpdatePrbBase,1./UpdatePrbBase],[0.,f(UpdatePrbBase)],'--k') # Add dashed line at Pi=0.25
+    plt.plot(old_div(1.,UpdatePrbVec),uCostVec*10000,color='#1f77b4')
+    plt.plot([old_div(1.,UpdatePrbBase),old_div(1.,UpdatePrbBase)],[0.,f(UpdatePrbBase)],'--k') # Add dashed line at Pi=0.25
     plt.xlim([1.0,16.0])
     plt.ylim([0.0,35.0])
     plt.xlabel('Expected periods between information updates $\Pi^{-1}$ ')
@@ -1161,10 +1166,10 @@
     # Plot birth value vs 1/UpdatePrb
     f = LinearInterp(UpdatePrbVec,vVec)
-    vBot = f(1./16)
+    vBot = f(old_div(1.,16))
     vTop = np.max(vVec)
-    plt.plot(1./UpdatePrbVec,vVec,color='#1f77b4')
-    plt.plot([1./UpdatePrbBase,1./UpdatePrbBase],[vBot,f(UpdatePrbBase)],'--k')
+    plt.plot(old_div(1.,UpdatePrbVec),vVec,color='#1f77b4')
+    plt.plot([old_div(1.,UpdatePrbBase),old_div(1.,UpdatePrbBase)],[vBot,f(UpdatePrbBase)],'--k')
     plt.xlim([1.0,16.0])
     plt.ylim(vBot,vTop)
     plt.xlabel('Expected periods between information updates $\Pi^{-1}$ ')
```
```diff
--- ./HARK/cAndCwithStickyE/StickyE_MAIN.py (original)
+++ ./HARK/cAndCwithStickyE/StickyE_MAIN.py (refactored)
@@ -5,18 +5,24 @@
 file in the results directory. TeX code for tables in the paper are saved in
 the tables directory. See StickyEparams for calibrated model parameters.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import os
 import numpy as np
 import csv
 from time import clock
 from copy import deepcopy
-from StickyEmodel import StickyEmarkovConsumerType, StickyEmarkovRepAgent, StickyCobbDouglasMarkovEconomy
+from .StickyEmodel import StickyEmarkovConsumerType, StickyEmarkovRepAgent, StickyCobbDouglasMarkovEconomy
 from HARK.ConsumptionSaving.ConsAggShockModel import SmallOpenMarkovEconomy
 from HARK.utilities import plotFuncs
 import matplotlib.pyplot as plt
-import StickyEparams as Params
-from StickyEtools import makeStickyEdataFile, runStickyEregressions, makeResultsTable, \
+from . import StickyEparams as Params
+from .StickyEtools import makeStickyEdataFile, runStickyEregressions, makeResultsTable, \
     runStickyEregressionsInStata, makeParameterTable, makeEquilibriumTable, \
     makeMicroRegressionTable, extractSampleMicroData, makeuCostVsPiFig, \
     makeValueVsAggShkVarFig, makeValueVsPiFig
@@ -38,7 +44,7 @@
 ignore_periods = Params.ignore_periods # Number of simulated periods to ignore as a "burn-in" phase
 interval_size = Params.interval_size   # Number of periods in each non-overlapping subsample
 total_periods = Params.periods_to_sim  # Total number of periods in simulation
-interval_count = (total_periods-ignore_periods)/interval_size # Number of intervals in the macro regressions
+interval_count = old_div((total_periods-ignore_periods),interval_size) # Number of intervals in the macro regressions
 periods_to_sim_micro = Params.periods_to_sim_micro # To save memory, micro regressions are run on a smaller sample
 AgentCount_micro = Params.AgentCount_micro # To save memory, micro regressions are run on a smaller sample
 my_counts = [interval_size,interval_count]
@@ -141,7 +147,7 @@
             StickySOmarkovEconomy.makeHistory()
             makeStickyEdataFile(StickySOmarkovEconomy,ignore_periods,description='trash',filename='TEMP',save_data=False,calc_micro_stats=True)
             vBirth_S = np.genfromtxt(results_dir + 'TEMPBirthValue.csv', delimiter=',')
-            uCost = np.mean(1. - (vBirth_S/vBirth_F)**(1./(1.-CRRA)))
+            uCost = np.mean(1. - (old_div(vBirth_S,vBirth_F))**(old_div(1.,(1.-CRRA))))
             uCostVec[j] = uCost
             vVec[j] = np.mean(vBirth_S)
             print('Found that uCost=' + str(uCost) + ' for Pi=' + str(UpdatePrbVec[j]))
```
```diff
--- ./HARK/cAndCwithStickyE/StickyE_NO_MARKOV.py (original)
+++ ./HARK/cAndCwithStickyE/StickyE_NO_MARKOV.py (refactored)
@@ -9,16 +9,22 @@
 file in the ./Results directory. TeX code for tables in the paper are saved in
 the ./Tables directory. See StickyEparams for calibrated model parameters.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from time import clock
 from copy import deepcopy
-from StickyEmodel import StickyEconsumerType, StickyErepAgent, StickyCobbDouglasEconomy
+from .StickyEmodel import StickyEconsumerType, StickyErepAgent, StickyCobbDouglasEconomy
 from HARK.ConsumptionSaving.ConsAggShockModel import SmallOpenEconomy
 from HARK.utilities import plotFuncs
 import matplotlib.pyplot as plt
-import StickyEparams as Params
-from StickyEtools import makeStickyEdataFile, runStickyEregressions, makeResultsTable, \
+from . import StickyEparams as Params
+from .StickyEtools import makeStickyEdataFile, runStickyEregressions, makeResultsTable, \
     runStickyEregressionsInStata, makeParameterTable, makeEquilibriumTable, \
     makeMicroRegressionTable, extractSampleMicroData, makeuCostVsPiFig
@@ -38,7 +44,7 @@
 ignore_periods = Params.ignore_periods # Number of simulated periods to ignore as a "burn-in" phase
 interval_size = Params.interval_size   # Number of periods in each non-overlapping subsample
 total_periods = Params.periods_to_sim  # Total number of periods in simulation
-interval_count = (total_periods-ignore_periods)/interval_size # Number of intervals in the macro regressions
+interval_count = old_div((total_periods-ignore_periods),interval_size) # Number of intervals in the macro regressions
 periods_to_sim_micro = Params.periods_to_sim_micro # To save memory, micro regressions are run on a smaller sample
 AgentCount_micro = Params.AgentCount_micro # To save memory, micro regressions are run on a smaller sample
 my_counts = [interval_size,interval_count]
```
```diff
--- ./HARK/cAndCwithStickyE/StickyEmodel.py (original)
+++ ./HARK/cAndCwithStickyE/StickyEmodel.py (refactored)
@@ -16,7 +16,10 @@
 project. Non-Markov AgentTypes are imported by StickyE_NO_MARKOV.
 Calibrated parameters for each type are found in StickyEparams.
 '''
+from __future__ import division
+
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from ConsAggShockModel import AggShockConsumerType, AggShockMarkovConsumerType, CobbDouglasEconomy, CobbDouglasMarkovEconomy
 from RepAgentModel import RepAgentConsumerType, RepAgentMarkovConsumerType
@@ -86,7 +89,7 @@
             Array of size AgentCount with this period's (new) misperception.
         '''
         pLvlErr = np.ones(self.AgentCount)
-        pLvlErr[self.dont] = self.PermShkAggNow/self.PermGroFacAgg
+        pLvlErr[self.dont] = old_div(self.PermShkAggNow,self.PermGroFacAgg)
         return pLvlErr
@@ -120,7 +123,7 @@
         self.pLvlErrNow *= pLvlErrNew # Perception error accumulation
         # Calculate (mis)perceptions of the permanent shock
-        PermShkPcvd = self.PermShkNow/pLvlErrNew
+        PermShkPcvd = old_div(self.PermShkNow,pLvlErrNew)
         PermShkPcvd[self.update] *= self.pLvlErrNow[self.update] # Updaters see the true permanent shock and all missed news
         self.pLvlErrNow[self.update] = 1.0
         self.PermShkNow = PermShkPcvd
@@ -152,7 +155,7 @@
         yLvlNow = self.pLvlTrue*self.TranShkNow # This is true income level
         mLvlTrueNow = bLvlNow + yLvlNow # This is true market resource level
-        mNrmPcvdNow = mLvlTrueNow/self.pLvlNow # This is perceived normalized resources
+        mNrmPcvdNow = old_div(mLvlTrueNow,self.pLvlNow) # This is perceived normalized resources
         self.mNrmNow = mNrmPcvdNow
         self.mLvlTrueNow = mLvlTrueNow
         self.yLvlNow = yLvlNow # Only labor income
@@ -192,7 +195,7 @@
         AggShockConsumerType.getPostStates(self)
         self.cLvlNow = self.cNrmNow*self.pLvlNow # True consumption level
         self.aLvlNow = self.mLvlTrueNow - self.cLvlNow # True asset level
-        self.aNrmNow = self.aLvlNow/self.pLvlNow # Perceived normalized assets
+        self.aNrmNow = old_div(self.aLvlNow,self.pLvlNow) # Perceived normalized assets
@@ -264,7 +267,7 @@
             Array of size AgentCount with this period's (new) misperception.
         '''
         pLvlErr = np.ones(self.AgentCount)
-        pLvlErr[self.dont] = self.PermShkAggNow/self.PermGroFacAgg[self.MrkvNowPcvd[self.dont]]
+        pLvlErr[self.dont] = old_div(self.PermShkAggNow,self.PermGroFacAgg[self.MrkvNowPcvd[self.dont]])
         return pLvlErr
@@ -318,7 +321,7 @@
         self.pLvlNow = self.getpLvlPcvd()
         # Calculate true values of variables
-        self.kNrmTrue = aLvlPrev/self.pLvlTrue
+        self.kNrmTrue = old_div(aLvlPrev,self.pLvlTrue)
         self.yNrmTrue = self.kNrmTrue**self.CapShare*self.TranShkNow**(1.-self.CapShare) - self.kNrmTrue*self.DeprFac
         self.Rfree = 1. + self.CapShare*self.kNrmTrue**(self.CapShare-1.)*self.TranShkNow**(1.-self.CapShare) - self.DeprFac
         self.wRte = (1.-self.CapShare)*self.kNrmTrue**self.CapShare*self.TranShkNow**(-self.CapShare)
@@ -326,7 +329,7 @@
         self.mLvlTrue = self.mNrmTrue*self.pLvlTrue
         # Calculate perception of normalized market resources
-        self.mNrmNow = self.mLvlTrue/self.pLvlNow
+        self.mNrmNow = old_div(self.mLvlTrue,self.pLvlNow)
     def getControls(self):
@@ -349,7 +352,7 @@
         '''
         RepAgentConsumerType.getPostStates(self)
         self.aLvlNow = self.mLvlTrue - self.cLvlNow # This is true
-        self.aNrmNow = self.aLvlNow/self.pLvlTrue # This is true
+        self.aNrmNow = old_div(self.aLvlNow,self.pLvlTrue) # This is true
     def getpLvlPcvd(self):
         '''
@@ -445,7 +448,7 @@
         # Combine updaters and non-updaters to get average pLvl perception by Markov state
         self.MrkvPcvd = dont_mass + update_mass # Total mass of agent in each state
-        pLvlPcvd = (dont_mass*dont_pLvlPcvd + update_mass*update_pLvlPcvd)/self.MrkvPcvd
+        pLvlPcvd = old_div((dont_mass*dont_pLvlPcvd + update_mass*update_pLvlPcvd),self.MrkvPcvd)
         pLvlPcvd[self.MrkvPcvd==0.] = 1.0 # Fix division by zero problem when MrkvPcvd[i]=0
         return pLvlPcvd
```
```diff
--- ./HARK/cstwMPC/MakeCSTWfigsForSlides.py (original)
+++ ./HARK/cstwMPC/MakeCSTWfigsForSlides.py (refactored)
@@ -3,8 +3,14 @@
 All Booleans at the top of SetupParamsCSTW should be set to False, as this module
 imports cstwMPC; there's no need to actually do anything but load the model.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
-from cstwMPC import *
+from builtins import str
+from builtins import range
+from past.utils import old_div
+from .cstwMPC import *
 import matplotlib.pyplot as plt
 plot_range = (0.0,30.0)
@@ -39,8 +45,8 @@
     h = m[2] - m[0]
     m_dist = np.zeros(m.shape) + np.nan
     for j in range(m.size):
-        x = (m_temp - m[j])/h
-        m_dist[j] = np.sum(epaKernel(x))/(n*h)
+        x = old_div((m_temp - m[j]),h)
+        m_dist[j] = old_div(np.sum(epaKernel(x)),(n*h))
     print('did beta= ' + str(beta))
     return c, m_dist
@@ -54,7 +60,7 @@
 for b in range(17):
     beta = 0.978 + b*0.001
     highest = np.max(pdf_array[b,])
-    scale = 1.5/highest
+    scale = old_div(1.5,highest)
     scale = 4.0
     plt.ylim(0,2.5)
     plt.plot(m,scale*pdf_array[b,],'-c')
```
```diff
--- ./HARK/cstwMPC/SetupParamsCSTW.py (original)
+++ ./HARK/cstwMPC/SetupParamsCSTW.py (refactored)
@@ -1,6 +1,10 @@
 '''
 Loads parameters used in the cstwMPC estimations.
 '''
+from __future__ import division
+from __future__ import print_function
+from builtins import range
+from past.utils import old_div
 import numpy as np
 import csv
 from copy import deepcopy
@@ -83,17 +87,17 @@
 TypeWeight_lifecycle = [d_pct,h_pct,c_pct]
-IndL = 10.0/9.0                       # Labor supply per individual (constant)
+IndL = old_div(10.0,9.0)              # Labor supply per individual (constant)
 PermGroFac_i = [1.000**0.25]          # Permanent income growth factor (no perm growth)
 DiscFac_i = 0.97                      # Default intertemporal discount factor
-LivPrb_i = [1.0 - 1.0/160.0]          # Survival probability
+LivPrb_i = [1.0 - old_div(1.0,160.0)] # Survival probability
 PermShkStd_i = [(0.014/11)**0.5]      # Standard deviation of permanent shocks to income
 TranShkStd_i = [(0.014)**0.5]         # Standard deviation of transitory shocks to income
 TranShkStd = (np.concatenate((np.linspace(0.1,0.12,17), 0.12*np.ones(17), np.linspace(0.12,0.075,61), np.linspace(0.074,0.007,68), np.zeros(retired_T+1)))*4)**0.5
 TranShkStd = np.ndarray.tolist(TranShkStd)
-PermShkStd = np.concatenate((((0.00011342*(np.linspace(24,64.75,working_T-1)-47)**2 + 0.01)/(11.0/4.0))**0.5,np.zeros(retired_T+1)))
+PermShkStd = np.concatenate(((old_div((0.00011342*(np.linspace(24,64.75,working_T-1)-47)**2 + 0.01),(old_div(11.0,4.0))))**0.5,np.zeros(retired_T+1)))
 PermShkStd = np.ndarray.tolist(PermShkStd)
@@ -214,7 +218,7 @@
 init_infinite = {"CRRA":CRRA,
-                "Rfree":1.01/LivPrb_i[0],
+                "Rfree":old_div(1.01,LivPrb_i[0]),
                 "PermGroFac":PermGroFac_i,
                 "PermGroFacAgg":1.0,
                 "BoroCnstArt":BoroCnstArt,
```
```diff
--- ./HARK/cstwMPC/cstwMPC.py (original)
+++ ./HARK/cstwMPC/cstwMPC.py (refactored)
@@ -1,7 +1,13 @@
 '''
 A second stab / complete do-over of cstwMPC. Steals some bits from old version.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from copy import copy, deepcopy
 from time import clock
@@ -9,7 +15,7 @@
     getPercentiles, getLorenzShares, calcSubpopAvg, approxLognormal
 from HARK.simulation import drawDiscrete
 from HARK import Market
-import SetupParamsCSTW as Params
+from . import SetupParamsCSTW as Params
 import HARK.ConsumptionSaving.ConsIndShockModel as Model
 from HARK.ConsumptionSaving.ConsAggShockModel import CobbDouglasEconomy, AggShockConsumerType
 from scipy.optimize import golden, brentq
@@ -35,11 +41,11 @@
         self.t_cycle = copy(self.t_age)
         if hasattr(self,'kGrid'):
             self.aLvlNow = self.kInit*np.ones(self.AgentCount) # Start simulation near SS
-            self.aNrmNow = self.aLvlNow/self.pLvlNow
+            self.aNrmNow = old_div(self.aLvlNow,self.pLvlNow)
     def marketAction(self):
         if hasattr(self,'kGrid'):
-            self.pLvl = self.pLvlNow/np.mean(self.pLvlNow)
+            self.pLvl = old_div(self.pLvlNow,np.mean(self.pLvlNow))
         self.simulate(1)
     def updateIncomeProcess(self):
@@ -55,7 +61,7 @@
         none
         '''
         if self.cycles == 0:
-            tax_rate = (self.IncUnemp*self.UnempPrb)/((1.0-self.UnempPrb)*self.IndL)
+            tax_rate = old_div((self.IncUnemp*self.UnempPrb),((1.0-self.UnempPrb)*self.IndL))
             TranShkDstn = deepcopy(approxMeanOneLognormal(self.TranShkCount,sigma=self.TranShkStd[0],tail_N=0))
             TranShkDstn[0] = np.insert(TranShkDstn[0]*(1.0-self.UnempPrb),0,self.UnempPrb)
             TranShkDstn[1] = np.insert(TranShkDstn[1]*(1.0-tax_rate)*self.IndL,0,self.IncUnemp)
@@ -150,7 +156,7 @@
         CohortWeight = self.PopGroFac**(-age)
         CapAgg = np.sum(aLvl*CohortWeight)
         IncAgg = np.sum(pLvl*TranShk*CohortWeight)
-        KtoYnow = CapAgg/IncAgg
+        KtoYnow = old_div(CapAgg,IncAgg)
         self.KtoYnow = KtoYnow
         # Store Lorenz data if requested
@@ -178,12 +184,12 @@
             TranShk = TranShk[order]
             age = age[order]
             Emp = Emp[order]
-            aNrm = aLvl/pLvl # Normalized assets (wealth ratio)
+            aNrm = old_div(aLvl,pLvl) # Normalized assets (wealth ratio)
             IncLvl = TranShk*pLvl # Labor income this period
             # Calculate overall population MPC and by subpopulations
             MPCannual = 1.0 - (1.0 - MPC)**4
-            self.MPCall = np.sum(MPCannual*CohortWeight)/np.sum(CohortWeight)
+            self.MPCall = old_div(np.sum(MPCannual*CohortWeight),np.sum(CohortWeight))
             employed = Emp
             unemployed = np.logical_not(employed)
             if self.T_retire > 0: # Adjust for the lifecycle model, where agents might be retired instead
@@ -192,9 +198,9 @@
                 retired = age >= self.T_retire
             else:
                 retired = np.zeros_like(unemployed,dtype=bool)
-            self.MPCunemployed = np.sum(MPCannual[unemployed]*CohortWeight[unemployed])/np.sum(CohortWeight[unemployed])
-            self.MPCemployed = np.sum(MPCannual[employed]*CohortWeight[employed])/np.sum(CohortWeight[employed])
-            self.MPCretired = np.sum(MPCannual[retired]*CohortWeight[retired])/np.sum(CohortWeight[retired])
+            self.MPCunemployed = old_div(np.sum(MPCannual[unemployed]*CohortWeight[unemployed]),np.sum(CohortWeight[unemployed]))
+            self.MPCemployed = old_div(np.sum(MPCannual[employed]*CohortWeight[employed]),np.sum(CohortWeight[employed]))
+            self.MPCretired = old_div(np.sum(MPCannual[retired]*CohortWeight[retired]),np.sum(CohortWeight[retired]))
             self.MPCbyWealthRatio = calcSubpopAvg(MPCannual,aNrm,self.cutoffs,CohortWeight)
             self.MPCbyIncome = calcSubpopAvg(MPCannual,IncLvl,self.cutoffs,CohortWeight)
@@ -205,14 +211,14 @@
                 wealth_quintiles[aLvl > quintile_cuts[1]] = 3
                 wealth_quintiles[aLvl > quintile_cuts[2]] = 4
                 wealth_quintiles[aLvl > quintile_cuts[3]] = 5
-                MPC_cutoff = getPercentiles(MPCannual,weights=CohortWeight,percentiles=[2.0/3.0]) # Looking at consumers with MPCs in the top 1/3
+                MPC_cutoff = getPercentiles(MPCannual,weights=CohortWeight,percentiles=[old_div(2.0,3.0)]) # Looking at consumers with MPCs in the top 1/3
                 these = MPCannual > MPC_cutoff
                 in_top_third_MPC = wealth_quintiles[these]
                 temp_weights = CohortWeight[these]
                 hand_to_mouth_total = np.sum(temp_weights)
                 hand_to_mouth_pct = []
                 for q in range(1,6):
-                    hand_to_mouth_pct.append(np.sum(temp_weights[in_top_third_MPC == q])/hand_to_mouth_total)
+                    hand_to_mouth_pct.append(old_div(np.sum(temp_weights[in_top_third_MPC == q]),hand_to_mouth_total))
                 self.HandToMouthPct = np.array(hand_to_mouth_pct)
         else: # If we don't want these stats, just put empty values in history
@@ -256,7 +262,7 @@
     # Distribute the parameters to the various types, assigning consecutive types the same
    # value if there are more types than values
-    replication_factor = len(self.agents)/param_count
+    replication_factor = old_div(len(self.agents),param_count)
     j = 0
     b = 0
     while j < len(self.agents):
@@ -483,7 +489,7 @@
     w, v = np.linalg.eig(np.transpose(MrkvArray))
     idx = (np.abs(w-1.0)).argmin()
     x = v[:,idx].astype(float)
-    AgeDstn = (x/np.sum(x))
+    AgeDstn = (old_div(x,np.sum(x)))
     return AgeDstn
####################################################################################################
```
```diff
--- ./HARK/FashionVictim/FashionVictimModel.py (original)
+++ ./HARK/FashionVictim/FashionVictimModel.py (refactored)
@@ -4,13 +4,20 @@
 based on the proportion of the population with the same style (as well as direct
 preferences each style), and pay switching costs if they change.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
+from builtins import object
 from HARK import AgentType, Solution, NullFunc
 from HARK.interpolation import LinearInterp
 from HARK.utilities import approxUniform, plotFuncs
 import numpy as np
 import scipy.stats as stats
-import FashionVictimParams as Params
+from . import FashionVictimParams as Params
 from copy import copy
 class FashionSolution(Solution):
@@ -85,7 +92,7 @@
         self.distance_criteria = ['pNextSlope','pNextWidth','pNextIntercept']
-class FashionMarketInfo():
+class FashionMarketInfo(object):
     '''
     A class for representing the current distribution of styles in the population.
     '''
@@ -331,19 +338,19 @@
     Vboth_P = np.vstack((V_P2J,V_P2P))
     Vbest_P = np.max(Vboth_P,axis=0)
     Vnorm_P = Vboth_P - np.tile(np.reshape(Vbest_P,(1,pGrid.size)),(2,1))
-    ExpVnorm_P = np.exp(Vnorm_P/pref_shock_mag)
+    ExpVnorm_P = np.exp(old_div(Vnorm_P,pref_shock_mag))
     SumExpVnorm_P = np.sum(ExpVnorm_P,axis=0)
     V_P = np.log(SumExpVnorm_P)*pref_shock_mag + Vbest_P
-    switch_P = ExpVnorm_P[0,:]/SumExpVnorm_P
+    switch_P = old_div(ExpVnorm_P[0,:],SumExpVnorm_P)
     Vboth_J = np.vstack((V_J2J,V_J2P))
     Vbest_J = np.max(Vboth_J,axis=0)
     Vnorm_J = Vboth_J - np.tile(np.reshape(Vbest_J,(1,pGrid.size)),(2,1))
-    ExpVnorm_J = np.exp(Vnorm_J/pref_shock_mag)
+    ExpVnorm_J = np.exp(old_div(Vnorm_J,pref_shock_mag))
     SumExpVnorm_J = np.sum(ExpVnorm_J,axis=0)
     V_J = np.log(SumExpVnorm_J)*pref_shock_mag + Vbest_J
-    switch_J = ExpVnorm_J[1,:]/SumExpVnorm_J
+    switch_J = old_div(ExpVnorm_J[1,:],SumExpVnorm_J)
     VfuncPunkNow = LinearInterp(pGrid,V_P)
```
```diff
--- ./HARK/ConsumptionSaving/Demos/Chinese_Growth.py (original)
+++ ./HARK/ConsumptionSaving/Demos/Chinese_Growth.py (refactored)
@@ -33,10 +33,14 @@
 HARK's MarkovConsumerType will be a very convient way to run this experiment. So we need
 to prepare the parameters to create that ConsumerType, and then create it.
 """
+from __future__ import division
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import HARK.cstwMPC.SetupParamsCSTW as cstwParams
@@ -49,7 +53,7 @@
 StateCount = 2 #number of Markov states
-ProbGrowthEnds = (1./160.) #probability agents assign to the high-growth state ending
+ProbGrowthEnds = (old_div(1.,160.)) #probability agents assign to the high-growth state ending
 MrkvArray = np.array([[1.,0.],[ProbGrowthEnds,1.-ProbGrowthEnds]]) #Markov array
 init_China_parameters['MrkvArray'] = [MrkvArray] #assign the Markov array as a parameter
@@ -245,7 +249,7 @@
     # After looping through all the ConsumerTypes, calculate and return the path of the national
     # saving rate
-    NatlSavingRate = (NatlIncome - NatlCons)/NatlIncome
+    NatlSavingRate = old_div((NatlIncome - NatlCons),NatlIncome)
     return NatlSavingRate
```
```diff
--- ./HARK/ConsumptionSaving/Demos/MPC_credit_vs_MPC_income.py (original)
+++ ./HARK/ConsumptionSaving/Demos/MPC_credit_vs_MPC_income.py (refactored)
@@ -20,10 +20,13 @@
 """
 The first step is to create the ConsumerType we want to solve the model for.
 """
+from __future__ import division
+from __future__ import print_function
+from past.utils import old_div
 from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType
@@ -159,15 +162,15 @@
     income_change = credit_change
     # Now, calculate the approximate MPC out of income
-    return (BaselineExample.solution[0].cFunc(x + income_change) -
-            BaselineExample.solution[0].cFunc(x)) / income_change
+    return old_div((BaselineExample.solution[0].cFunc(x + income_change) -
+            BaselineExample.solution[0].cFunc(x)), income_change)
 def FirstDiffMPC_Credit(x):
     # Approximate the MPC out of credit by plotting how much more of the increased credit the agent
     # with higher credit spends
-    return (XtraCreditExample.solution[0].cFunc(x) -
-            BaselineExample.solution[0].cFunc(x)) / credit_change
+    return old_div((XtraCreditExample.solution[0].cFunc(x) -
+            BaselineExample.solution[0].cFunc(x)), credit_change)
```
```diff
--- ./HARK/ConsumptionSaving/Demos/Fagereng_demo.py (original)
+++ ./HARK/ConsumptionSaving/Demos/Fagereng_demo.py (refactored)
@@ -32,7 +32,12 @@
 from a transitory shock could exceed 1. Option is included here because this target
 tends to push the estimate around a bit.
 '''
+from __future__ import division
+from __future__ import print_function
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from copy import deepcopy
@@ -62,7 +67,7 @@
 base_params = deepcopy(init_infinite)
 base_params['LivPrb'] = [0.975]
-base_params['Rfree'] = 1.04/base_params['LivPrb'][0]
+base_params['Rfree'] = old_div(1.04,base_params['LivPrb'][0])
 base_params['PermShkStd'] = [0.1]
 base_params['TranShkStd'] = [0.1]
 base_params['T_age'] = T_kill # Kill off agents if they manage to achieve T_kill working years
@@ -129,12 +134,12 @@
     MPC_this_type = np.zeros((ThisType.AgentCount,4))
     for k in range(4): # Get MPC for all agents of this type
         Llvl = lottery_size[k]
-        Lnrm = Llvl/ThisType.pLvlNow
+        Lnrm = old_div(Llvl,ThisType.pLvlNow)
         if do_secant:
-            SplurgeNrm = Splurge/ThisType.pLvlNow
+            SplurgeNrm = old_div(Splurge,ThisType.pLvlNow)
             mAdj = ThisType.mNrmNow + Lnrm - SplurgeNrm
             cAdj = ThisType.cFunc[0](mAdj) + SplurgeNrm
-            MPC_this_type[:,k] = (cAdj - c_base)/Lnrm
+            MPC_this_type[:,k] = old_div((cAdj - c_base),Lnrm)
         else:
             mAdj = ThisType.mNrmNow + Lnrm
             MPC_this_type[:,k] = cAdj = ThisType.cFunc[0].derivative(mAdj)
@@ -160,7 +165,7 @@
     if verbose:
         print(simulated_MPC_means)
     else:
-        print (center, spread, distance)
+        print((center, spread, distance))
     return distance
```
```diff
--- ./HARK/ConsumptionSaving/ConsMedModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsMedModel.py (refactored)
@@ -1,7 +1,13 @@
 '''
 Consumption-saving models that also include medical spending.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from scipy.optimize import brentq
 from HARK import HARKobject
@@ -9,11 +15,11 @@
     CRRAutility, CRRAutility_inv, CRRAutility_invP, CRRAutilityPP, \
     makeGridExpMult, NullFunc
 from HARK.simulation import drawLognormal
-from ConsIndShockModel import ConsumerSolution
+from .ConsIndShockModel import ConsumerSolution
 from HARK.interpolation import BilinearInterpOnInterp1D, TrilinearInterp, BilinearInterp, CubicInterp, \
     LinearInterp, LowerEnvelope3D, UpperEnvelope, LinearInterpOnInterp1D, \
     VariableLowerBoundFunc3D
-from ConsGenIncProcessModel import ConsGenIncProcessSolver, PersistentShockConsumerType, \
+from .ConsGenIncProcessModel import ConsGenIncProcessSolver, PersistentShockConsumerType, \
     ValueFunc2D, MargValueFunc2D, MargMargValueFunc2D, \
     VariableLowerBoundFunc2D
 from copy import deepcopy
@@ -79,23 +85,23 @@
             elif MedShk == 0: # All consumption when MedShk = 0
                 cLvl = xLvl
             else:
-                optMedZeroFunc = lambda c : (MedShk/MedPrice)**(-1.0/CRRAcon)*\
-                                 ((xLvl-c)/MedPrice)**(CRRAmed/CRRAcon) - c
+                optMedZeroFunc = lambda c : (old_div(MedShk,MedPrice))**(old_div(-1.0,CRRAcon))*\
+                                 (old_div((xLvl-c),MedPrice))**(old_div(CRRAmed,CRRAcon)) - c
                 cLvl = brentq(optMedZeroFunc,0.0,xLvl) # Find solution to FOC
             cLvlGrid[i,j] = cLvl
     # Construct the consumption function and medical care function
     if xLvlCubicBool:
         if MedShkCubicBool:
-            raise NotImplementedError(), 'Bicubic interpolation not yet implemented'
+            raise NotImplementedError()('Bicubic interpolation not yet implemented')
         else:
             xLvlGrid_tiled = np.tile(np.reshape(xLvlGrid,(xLvlGrid.size,1)), (1,MedShkGrid.size))
             MedShkGrid_tiled = np.tile(np.reshape(MedShkGrid,(1,MedShkGrid.size)), (xLvlGrid.size,1))
-            dfdx = (CRRAmed/(CRRAcon*MedPrice))*(MedShkGrid_tiled/MedPrice)**(-1.0/CRRAcon)*\
-                   ((xLvlGrid_tiled - cLvlGrid)/MedPrice)**(CRRAmed/CRRAcon - 1.0)
-            dcdx = dfdx/(dfdx + 1.0)
+            dfdx = (old_div(CRRAmed,(CRRAcon*MedPrice)))*(old_div(MedShkGrid_tiled,MedPrice))**(old_div(-1.0,CRRAcon))*\
+                   (old_div((xLvlGrid_tiled - cLvlGrid),MedPrice))**(old_div(CRRAmed,CRRAcon) - 1.0)
+            dcdx = old_div(dfdx,(dfdx + 1.0))
             dcdx[0,:] = dcdx[1,:] # approximation; function goes crazy otherwise
             dcdx[:,0] = 1.0 # no Med when MedShk=0, so all x is c
             cFromxFunc_by_MedShk = []
@@ -129,7 +135,7 @@
         '''
         xLvl = self.xFunc(mLvl,pLvl,MedShk)
         cLvl = self.cFunc(xLvl,MedShk)
-        Med = (xLvl-cLvl)/self.MedPrice
+        Med = old_div((xLvl-cLvl),self.MedPrice)
         return cLvl,Med
     def derivativeX(self,mLvl,pLvl,MedShk):
@@ -160,7 +166,7 @@
         dxdm = self.xFunc.derivativeX(mLvl,pLvl,MedShk)
         dcdx = self.cFunc.derivativeX(xLvl,MedShk)
         dcdm = dxdm*dcdx
-        dMeddm = (dxdm - dcdm)/self.MedPrice
+        dMeddm = old_div((dxdm - dcdm),self.MedPrice)
         return dcdm,dMeddm
     def derivativeY(self,mLvl,pLvl,MedShk):
@@ -191,7 +197,7 @@
         dxdp = self.xFunc.derivativeY(mLvl,pLvl,MedShk)
         dcdx = self.cFunc.derivativeX(xLvl,MedShk)
         dcdp = dxdp*dcdx
-        dMeddp = (dxdp - dcdp)/self.MedPrice
+        dMeddp = old_div((dxdp - dcdp),self.MedPrice)
         return dcdp,dMeddp
     def derivativeZ(self,mLvl,pLvl,MedShk):
@@ -222,7 +228,7 @@
         dxdShk = self.xFunc.derivativeZ(mLvl,pLvl,MedShk)
         dcdx = self.cFunc.derivativeX(xLvl,MedShk)
         dcdShk = dxdShk*dcdx + self.cFunc.derivativeY(xLvl,MedShk)
-        dMeddShk = (dxdShk - dcdShk)/self.MedPrice
+        dMeddShk = old_div((dxdShk - dcdShk),self.MedPrice)
         return dcdShk,dMeddShk
@@ -407,7 +413,7 @@
             Optimal medical care for each point in (xLvl,MedShk).
         '''
         xLvl = self.xFunc(mLvl,pLvl,MedShk)
-        Med = (xLvl-self.cFunc(xLvl,MedShk))/self.MedPrice
+        Med = old_div((xLvl-self.cFunc(xLvl,MedShk)),self.MedPrice)
         return Med
     def derivativeX(self,mLvl,pLvl,MedShk):
@@ -438,7 +444,7 @@
         dxdm = self.xFunc.derivativeX(mLvl,pLvl,MedShk)
         dcdx = self.cFunc.derivativeX(xLvl,MedShk)
         dcdm = dxdm*dcdx
-        dMeddm = (dxdm - dcdm)/self.MedPrice
+        dMeddm = old_div((dxdm - dcdm),self.MedPrice)
         return dcdm,dMeddm
     def derivativeY(self,mLvl,pLvl,MedShk):
@@ -464,7 +470,7 @@
         '''
         xLvl = self.xFunc(mLvl,pLvl,MedShk)
         dxdp = self.xFunc.derivativeY(mLvl,pLvl,MedShk)
-        dMeddp = (dxdp - dxdp*self.cFunc.derivativeX(xLvl,MedShk))/self.MedPrice
+        dMeddp = old_div((dxdp - dxdp*self.cFunc.derivativeX(xLvl,MedShk)),self.MedPrice)
         return dMeddp
     def derivativeZ(self,mLvl,pLvl,MedShk):
@@ -492,7 +498,7 @@
         dxdShk = self.xFunc.derivativeZ(mLvl,pLvl,MedShk)
         dcdx = self.cFunc.derivativeX(xLvl,MedShk)
         dcdShk = dxdShk*dcdx + self.cFunc.derivativeY(xLvl,MedShk)
-        dMeddShk = (dxdShk - dcdShk)/self.MedPrice
+        dMeddShk = old_div((dxdShk - dcdShk),self.MedPrice)
         return dMeddShk
@@ -629,7 +635,7 @@
         vP_expected = np.sum(vPgrid*PrbGrid,axis=1)
         # Construct the marginal (marginal) value function for the terminal period
-        vPnvrs = vP_expected**(-1.0/self.CRRA)
+        vPnvrs = vP_expected**(old_div(-1.0,self.CRRA))
         vPnvrs[0] = 0.0
         vPnvrsFunc = BilinearInterp(np.tile(np.reshape(vPnvrs,(vPnvrs.size,1)),
                                     (1,trivial_grid.size)),mLvlGrid,trivial_grid)
@@ -892,7 +898,7 @@
         # Find minimum allowable end-of-period assets at each permanent income level
         PermIncMinNext = self.PermShkMinNext*self.pLvlNextFunc(self.pLvlGrid)
         IncLvlMinNext = PermIncMinNext*self.TranShkMinNext
-        aLvlMin = (self.solution_next.mLvlMin(PermIncMinNext) - IncLvlMinNext)/self.Rfree
+        aLvlMin = old_div((self.solution_next.mLvlMin(PermIncMinNext) - IncLvlMinNext),self.Rfree)
         # Make a function for the natural borrowing constraint by permanent income
         BoroCnstNat = LinearInterp(np.insert(self.pLvlGrid,0,0.0),np.insert(aLvlMin,0,0.0))
@@ -944,7 +950,7 @@
         cLvlNow = np.tile(np.reshape(self.uPinv(EndOfPrdvP),(1,pCount,mCount)),(MedCount,1,1))
         MedBaseNow = np.tile(np.reshape(self.uMedPinv(self.MedPrice*EndOfPrdvP),(1,pCount,mCount)),
                              (MedCount,1,1))
-        MedShkVals_tiled = np.tile(np.reshape(self.MedShkVals**(1.0/self.CRRAmed),(MedCount,1,1)),
+        MedShkVals_tiled = np.tile(np.reshape(self.MedShkVals**(old_div(1.0,self.CRRAmed)),(MedCount,1,1)),
                                    (1,pCount,mCount))
         MedLvlNow = MedShkVals_tiled*MedBaseNow
         aLvlNow_tiled = np.tile(np.reshape(aLvlNow,(1,pCount,mCount)),(MedCount,1,1))
@@ -1175,11 +1181,11 @@
         EndOfPrdvPP = self.DiscFacEff*self.Rfree*self.Rfree*np.sum(self.vPPfuncNext(self.mLvlNext,
                       self.pLvlNext)*self.ShkPrbs_temp,axis=0)
         EndOfPrdvPP = np.tile(np.reshape(EndOfPrdvPP,(1,pCount,EndOfPrdvPP.shape[1])),(MedCount,1,1))
-        dcda = EndOfPrdvPP/self.uPP(np.array(self.cLvlNow))
-        dMedda = EndOfPrdvPP/(self.MedShkVals_tiled*self.uMedPP(self.MedLvlNow))
+        dcda = old_div(EndOfPrdvPP,self.uPP(np.array(self.cLvlNow)))
+        dMedda = old_div(EndOfPrdvPP,(self.MedShkVals_tiled*self.uMedPP(self.MedLvlNow)))
         dMedda[0,:,:] = 0.0 # dMedda goes crazy when MedShk=0
-        MPC = dcda/(1.0 + dcda + self.MedPrice*dMedda)
-        MPM = dMedda/(1.0 + dcda + self.MedPrice*dMedda)
+        MPC = old_div(dcda,(1.0 + dcda + self.MedPrice*dMedda))
+        MPM = old_div(dMedda,(1.0 + dcda + self.MedPrice*dMedda))
         # Convert to marginal propensity to spend
         MPX = MPC + self.MedPrice*MPM
@@ -1356,7 +1362,7 @@
###############################################################################
 if __name__ == '__main__':
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     from HARK.utilities import CRRAutility_inv
     from time import clock
     import matplotlib.pyplot as plt
@@ -1407,7 +1413,7 @@
         pLvl = MedicalExample.pLvlGrid[0][p]
         M_temp = pLvl*M + MedicalExample.solution[0].mLvlMin(pLvl)
         P = pLvl*np.ones_like(M)
-        vP = MedicalExample.solution[0].vPfunc(M_temp,P)**(-1.0/MedicalExample.CRRA)
+        vP = MedicalExample.solution[0].vPfunc(M_temp,P)**(old_div(-1.0,MedicalExample.CRRA))
         plt.plot(M_temp,vP)
     print('Marginal value function (pseudo inverse)')
     plt.show()
```
```diff
--- ./HARK/ConsumptionSaving/ConsPrefShockModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsPrefShockModel.py (refactored)
@@ -6,10 +6,16 @@
 2) A combination of (1) and ConsKinkedR, demonstrating how to construct a new
 model by inheriting from multiple classes.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from HARK.utilities import approxMeanOneLognormal
-from ConsIndShockModel import IndShockConsumerType, ConsumerSolution, ConsIndShockSolver, \
+from .ConsIndShockModel import IndShockConsumerType, ConsumerSolution, ConsIndShockSolver, \
     ValueFunc, MargValueFunc, KinkedRconsumerType, ConsKinkedRsolver
 from HARK.interpolation import LinearInterpOnInterp1D, LinearInterp, CubicInterp, LowerEnvelope
@@ -291,7 +297,7 @@
         '''
         c_base = self.uPinv(EndOfPrdvP)
         PrefShkCount = self.PrefShkVals.size
-        PrefShk_temp = np.tile(np.reshape(self.PrefShkVals**(1.0/self.CRRA),(PrefShkCount,1)),
+        PrefShk_temp = np.tile(np.reshape(self.PrefShkVals**(old_div(1.0,self.CRRA)),(PrefShkCount,1)),
                                (1,c_base.size))
         self.cNrmNow = np.tile(c_base,(PrefShkCount,1))*PrefShk_temp
         self.mNrmNow = self.cNrmNow + np.tile(aNrmNow,(PrefShkCount,1))
@@ -326,7 +332,7 @@
         PrefShkCount = self.PrefShkVals.size
         cFunc_list = []
         for j in range(PrefShkCount):
-            MPCmin_j = self.MPCminNow*self.PrefShkVals[j]**(1.0/self.CRRA)
+            MPCmin_j = self.MPCminNow*self.PrefShkVals[j]**(old_div(1.0,self.CRRA))
             cFunc_this_shock = LowerEnvelope(LinearInterp(mNrm[j,:],cNrm[j,:],
                                              intercept_limit=self.hNrmNow*MPCmin_j,
                                              slope_limit=MPCmin_j),self.cFuncNowCnst)
@@ -382,8 +388,8 @@
         vNvrsP = vPnow*self.uinvP(vNrmNow)
         mNrm_temp = np.insert(mNrm_temp,0,self.mNrmMinNow)
         vNvrs = np.insert(vNvrs,0,0.0)
-        vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff**(-self.CRRA/(1.0-self.CRRA)))
-        MPCminNvrs = self.MPCminNow**(-self.CRRA/(1.0-self.CRRA))
+        vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff**(old_div(-self.CRRA,(1.0-self.CRRA))))
+        MPCminNvrs = self.MPCminNow**(old_div(-self.CRRA,(1.0-self.CRRA)))
         vNvrsFuncNow = CubicInterp(mNrm_temp,vNvrs,vNvrsP,MPCminNvrs*self.hNrmNow,MPCminNvrs)
         vFuncNow = ValueFunc(vNvrsFuncNow,self.CRRA)
         return vFuncNow
@@ -591,7 +597,7 @@
###############################################################################
 if __name__ == '__main__':
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     import matplotlib.pyplot as plt
     from HARK.utilities import plotFuncs
     from time import clock
```
```diff
--- ./HARK/ConsumptionSaving/RepAgentModel.py (original)
+++ ./HARK/ConsumptionSaving/RepAgentModel.py (refactored)
@@ -4,11 +4,17 @@
 take a heterogeneous agents approach. In these models, all attributes are either
 time invariant or exist on a short cycle.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from HARK.interpolation import LinearInterp
 from HARK.simulation import drawUniform, drawDiscrete
-from ConsIndShockModel import IndShockConsumerType, ConsumerSolution, MargValueFunc
+from .ConsIndShockModel import IndShockConsumerType, ConsumerSolution, MargValueFunc
 def solveConsRepAgent(solution_next,DiscFac,CRRA,IncomeDstn,CapShare,DeprFac,PermGroFac,aXtraGrid):
     '''
@@ -62,10 +68,10 @@
     # Calculate next period's capital-to-permanent-labor ratio under each combination
     # of end-of-period assets and shock realization
-    kNrmNext = aNrm_tiled/(PermGroFac*PermShkVals_tiled)
+    kNrmNext = old_div(aNrm_tiled,(PermGroFac*PermShkVals_tiled))
-    KtoLnext = kNrmNext/TranShkVals_tiled
+    KtoLnext = old_div(kNrmNext,TranShkVals_tiled)
     RfreeNext = 1. - DeprFac + CapShare*KtoLnext**(CapShare-1.)
     wRteNext = (1.-CapShare)*KtoLnext**CapShare
     mNrmNext = RfreeNext*kNrmNext + wRteNext*TranShkVals_tiled
@@ -75,7 +81,7 @@
     EndOfPrdvP = DiscFac*np.sum(RfreeNext*(PermGroFac*PermShkVals_tiled)**(-CRRA)*vPnext*ShkPrbs_tiled,axis=1)
-    cNrmNow = EndOfPrdvP**(-1./CRRA)
+    cNrmNow = EndOfPrdvP**(old_div(-1.,CRRA))
     mNrmNow = aNrmNow + cNrmNow
@@ -150,10 +156,10 @@
         # Calculate next period's capital-to-permanent-labor ratio under each combination
         # of end-of-period assets and shock realization
-        kNrmNext = aNrm_tiled/(PermGroFac[j]*PermShkVals_tiled)
+        kNrmNext = old_div(aNrm_tiled,(PermGroFac[j]*PermShkVals_tiled))
         # Calculate next period's market resources
-        KtoLnext = kNrmNext/TranShkVals_tiled
+        KtoLnext = old_div(kNrmNext,TranShkVals_tiled)
         RfreeNext = 1. - DeprFac + CapShare*KtoLnext**(CapShare-1.)
         wRteNext = (1.-CapShare)*KtoLnext**CapShare
         mNrmNext = RfreeNext*kNrmNext + wRteNext*TranShkVals_tiled
@@ -170,7 +176,7 @@
     vPfuncNow_list = []
     for i in range(StateCount):
         # Invert the first order condition to get consumption, then find endogenous gridpoints
-        cNrmNow = EndOfPrdvP[i,:]**(-1./CRRA)
+        cNrmNow = EndOfPrdvP[i,:]**(old_div(-1.,CRRA))
         mNrmNow = aNrmNow + cNrmNow
         # Construct the consumption function and the marginal value function
@@ -225,7 +231,7 @@
         # Calculate new states: normalized market resources and permanent income level
         self.pLvlNow = pLvlPrev*self.PermShkNow # Updated permanent income level
-        self.kNrmNow = aNrmPrev/self.PermShkNow
+        self.kNrmNow = old_div(aNrmPrev,self.PermShkNow)
         self.yNrmNow = self.kNrmNow**self.CapShare*self.TranShkNow**(1.-self.CapShare)
         self.Rfree = 1. + self.CapShare*self.kNrmNow**(self.CapShare-1.)*self.TranShkNow**(1.-self.CapShare) - self.DeprFac
         self.wRte = (1.-self.CapShare)*self.kNrmNow**self.CapShare*self.TranShkNow**(-self.CapShare)
@@ -328,7 +334,7 @@
     from copy import deepcopy
     from time import clock
     from HARK.utilities import plotFuncs
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     RA_params = deepcopy(Params.init_idiosyncratic_shocks)
```
```diff
--- ./HARK/ConsumptionSaving/ConsGenIncProcessModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsGenIncProcessModel.py (refactored)
@@ -4,7 +4,13 @@
 ConsIndShockModel by explicitly tracking persistent income as a state variable,
 and allows (log) persistent income to follow an AR1 process rather than random walk.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 from copy import deepcopy
 import numpy as np
 from HARK import HARKobject
@@ -14,7 +20,7 @@
     CRRAutility_invP, CRRAutility_inv, CRRAutilityP_invP, \
     getPercentiles
 from HARK.simulation import drawLognormal, drawDiscrete, drawUniform
-from ConsIndShockModel import ConsIndShockSetup, ConsumerSolution, IndShockConsumerType
+from .ConsIndShockModel import ConsIndShockSetup, ConsumerSolution, IndShockConsumerType
 utility = CRRAutility
 utilityP = CRRAutilityP
@@ -419,7 +425,7 @@
         pLvlNext_temp = np.tile(np.reshape(self.pLvlNextFunc(self.pLvlGrid),(pLvlCount,1)),(1,ShkCount))*PermShkVals_temp
         # Find the natural borrowing constraint for each persistent income level
-        aLvlMin_candidates = (self.mLvlMinNext(pLvlNext_temp) - TranShkVals_temp*pLvlNext_temp)/self.Rfree
+        aLvlMin_candidates = old_div((self.mLvlMinNext(pLvlNext_temp) - TranShkVals_temp*pLvlNext_temp),self.Rfree)
         aLvlMinNow = np.max(aLvlMin_candidates,axis=1)
         self.BoroCnstNat = LinearInterp(np.insert(self.pLvlGrid,0,0.0),np.insert(aLvlMinNow,0,0.0))
@@ -662,7 +668,7 @@
         vNvrsP = np.concatenate((np.reshape(vNvrsP[0,:],(1,vNvrsP.shape[1])),vNvrsP),axis=0)
         # Add data at the lower bound of p
-        MPCminNvrs = self.MPCminNow**(-self.CRRA/(1.0-self.CRRA))
+        MPCminNvrs = self.MPCminNow**(old_div(-self.CRRA,(1.0-self.CRRA)))
         m_temp = np.reshape(mLvl_temp[:,0],(mSize+1,1))
         mLvl_temp = np.concatenate((m_temp,mLvl_temp),axis=1)
         vNvrs = np.concatenate((MPCminNvrs*m_temp,vNvrs),axis=1)
@@ -766,8 +772,8 @@
         '''
         # Calculate the MPC at each gridpoint
         EndOfPrdvPP = self.DiscFacEff*self.Rfree*self.Rfree*np.sum(self.vPPfuncNext(self.mLvlNext,self.pLvlNext)*self.ShkPrbs_temp,axis=0)
-        dcda = EndOfPrdvPP/self.uPP(np.array(cLvl[1:,1:]))
-        MPC = dcda/(dcda+1.)
+        dcda = old_div(EndOfPrdvPP,self.uPP(np.array(cLvl[1:,1:])))
+        MPC = old_div(dcda,(dcda+1.))
         MPC = np.concatenate((np.reshape(MPC[:,0],(MPC.shape[0],1)),MPC),axis=1) # Stick an extra MPC value at bottom; MPCmax doesn't work
         MPC = np.concatenate((self.MPCminNow*np.ones((1,self.aXtraGrid.size+1)),MPC),axis=0)
@@ -1273,7 +1279,7 @@
###############################################################################
 if __name__ == '__main__':
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     from HARK.utilities import plotFuncs
     from time import clock
     import matplotlib.pyplot as plt
@@ -1323,7 +1329,7 @@
     for p in pLvlGrid:
         M_temp = mNrmGrid*p + ExplicitExample.solution[0].mLvlMin(p)
         C = ExplicitExample.solution[0].cFunc(M_temp,p*np.ones_like(M_temp))
-        plt.plot(M_temp/p,C/p)
+        plt.plot(old_div(M_temp,p),old_div(C,p))
     plt.xlim(0.,20.)
     plt.xlabel('Normalized market resources mNrm')
     plt.ylabel('Normalized consumption cNrm')
```
```diff
--- ./HARK/ConsumptionSaving/ConsRepAgentModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsRepAgentModel.py (refactored)
@@ -4,11 +4,17 @@
 take a heterogeneous agents approach. In RA models, all attributes are either
 time invariant or exist on a short cycle; models must be infinite horizon.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 from HARK.interpolation import LinearInterp
 from HARK.simulation import drawUniform, drawDiscrete
-from ConsIndShockModel import IndShockConsumerType, ConsumerSolution, MargValueFunc
+from .ConsIndShockModel import IndShockConsumerType, ConsumerSolution, MargValueFunc

 def solveConsRepAgent(solution_next,DiscFac,CRRA,IncomeDstn,CapShare,DeprFac,PermGroFac,aXtraGrid):
     '''
@@ -62,10 +68,10 @@
     # Calculate next period's capital-to-permanent-labor ratio under each combination
     # of end-of-period assets and shock realization
-    kNrmNext = aNrm_tiled/(PermGroFac*PermShkVals_tiled)
-    KtoLnext = kNrmNext/TranShkVals_tiled
+    kNrmNext = old_div(aNrm_tiled,(PermGroFac*PermShkVals_tiled))
+    KtoLnext = old_div(kNrmNext,TranShkVals_tiled)
     RfreeNext = 1. - DeprFac + CapShare*KtoLnext**(CapShare-1.)
     wRteNext = (1.-CapShare)*KtoLnext**CapShare
     mNrmNext = RfreeNext*kNrmNext + wRteNext*TranShkVals_tiled
@@ -75,7 +81,7 @@
     EndOfPrdvP = DiscFac*np.sum(RfreeNext*(PermGroFac*PermShkVals_tiled)**(-CRRA)*vPnext*ShkPrbs_tiled,axis=1)
-    cNrmNow = EndOfPrdvP**(-1./CRRA)
+    cNrmNow = EndOfPrdvP**(old_div(-1.,CRRA))
     mNrmNow = aNrmNow + cNrmNow
@@ -150,10 +156,10 @@
         # Calculate next period's capital-to-permanent-labor ratio under each combination
         # of end-of-period assets and shock realization
-        kNrmNext = aNrm_tiled/(PermGroFac[j]*PermShkVals_tiled)
-        KtoLnext = kNrmNext/TranShkVals_tiled
+        kNrmNext = old_div(aNrm_tiled,(PermGroFac[j]*PermShkVals_tiled))
+        KtoLnext = old_div(kNrmNext,TranShkVals_tiled)
         # Calculate next period's market resources
         RfreeNext = 1. - DeprFac + CapShare*KtoLnext**(CapShare-1.)
         wRteNext = (1.-CapShare)*KtoLnext**CapShare
         mNrmNext = RfreeNext*kNrmNext + wRteNext*TranShkVals_tiled
@@ -170,7 +176,7 @@
     vPfuncNow_list = []
     for i in range(StateCount):
         # Invert the first order condition to get consumption, then find endogenous gridpoints
-        cNrmNow = EndOfPrdvP[i,:]**(-1./CRRA)
+        cNrmNow = EndOfPrdvP[i,:]**(old_div(-1.,CRRA))
         mNrmNow = aNrmNow + cNrmNow
         # Construct the consumption function and the marginal value function
@@ -225,7 +231,7 @@
         # Calculate new states: normalized market resources and permanent income level
         self.pLvlNow = pLvlPrev*self.PermShkNow # Same as in IndShockConsType
-        self.kNrmNow = aNrmPrev/self.PermShkNow
+        self.kNrmNow = old_div(aNrmPrev,self.PermShkNow)
         self.yNrmNow = self.kNrmNow**self.CapShare*self.TranShkNow**(1.-self.CapShare)
         self.Rfree = 1. + self.CapShare*self.kNrmNow**(self.CapShare-1.)*self.TranShkNow**(1.-self.CapShare) - self.DeprFac
         self.wRte = (1.-self.CapShare)*self.kNrmNow**self.CapShare*self.TranShkNow**(-self.CapShare)
@@ -328,7 +334,7 @@
     from copy import deepcopy
     from time import clock
     from HARK.utilities import plotFuncs
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params

     RA_params = deepcopy(Params.init_idiosyncratic_shocks)
```
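Every `old_div` suggestion in ConsRepAgentModel.py divides float-valued quantities (asset grids, shock realizations), so the division import alone preserves behavior there. A spot-check sketch with hypothetical stand-in values (not HARK's actual grids):

```python
import numpy as np

aNrm_tiled = np.linspace(0.0, 20.0, 48)   # float dtype, like HARK's asset grids
PermShkVals_tiled = np.ones(48)
PermGroFac = 1.01

# On float arrays, true division and Py2's "/" coincide, so no old_div is needed.
kNrmNext = aNrm_tiled / (PermGroFac * PermShkVals_tiled)
assert kNrmNext.dtype == np.float64
```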
```diff
--- ./HARK/ConsumptionSaving/ConsMarkovModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsMarkovModel.py (refactored)
@@ -4,11 +4,16 @@
 include a Markov state; the interest factor, permanent growth factor, and income
 distribution can vary with the discrete state.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import range
+from past.utils import old_div
 from copy import deepcopy
 import numpy as np
-from ConsIndShockModel import ConsIndShockSolver, ValueFunc, MargValueFunc, ConsumerSolution, IndShockConsumerType
-from ConsAggShockModel import AggShockConsumerType
+from .ConsIndShockModel import ConsIndShockSolver, ValueFunc, MargValueFunc, ConsumerSolution, IndShockConsumerType
+from .ConsAggShockModel import AggShockConsumerType
 from HARK.utilities import combineIndepDstns, warnings # Because of "patch" to warnings modules
 from HARK import Market, HARKobject
 from HARK.simulation import drawDiscrete, drawUniform
@@ -405,22 +410,22 @@
                                                   (1,self.StateCount)),(self.StateCount,1))
         temp_array = self.MrkvArray*WorstIncPrb_array
         WorstIncPrbNow = np.sum(temp_array,axis=1) # Probability of getting the "worst" income shock and transition from each current state
-        ExMPCmaxNext = (np.dot(temp_array,self.Rfree_list**(1.0-self.CRRA)*
-                        self.solution_next.MPCmax**(-self.CRRA))/WorstIncPrbNow)**\
-                        (-1.0/self.CRRA)
+        ExMPCmaxNext = (old_div(np.dot(temp_array,self.Rfree_list**(1.0-self.CRRA)*
+                        self.solution_next.MPCmax**(-self.CRRA)),WorstIncPrbNow))**\
+                        (old_div(-1.0,self.CRRA))
         DiscFacEff_temp = self.DiscFac*self.LivPrb
-        self.MPCmaxNow = 1.0/(1.0 + ((DiscFacEff_temp*WorstIncPrbNow)**
-                         (1.0/self.CRRA))/ExMPCmaxNext)
+        self.MPCmaxNow = old_div(1.0,(1.0 + old_div(((DiscFacEff_temp*WorstIncPrbNow)**
+                         (old_div(1.0,self.CRRA))),ExMPCmaxNext)))
         self.MPCmaxEff = self.MPCmaxNow
         self.MPCmaxEff[self.BoroCnstNat_list < self.mNrmMin_list] = 1.0
         # State-conditional PDV of human wealth
         hNrmPlusIncNext = self.ExIncNextAll + self.solution_next.hNrm
-        self.hNrmNow = np.dot(self.MrkvArray,(self.PermGroFac_list/self.Rfree_list)*
+        self.hNrmNow = np.dot(self.MrkvArray,(old_div(self.PermGroFac_list,self.Rfree_list))*
                        hNrmPlusIncNext)
         # Lower bound on MPC as m gets arbitrarily large
         temp = (DiscFacEff_temp*np.dot(self.MrkvArray,self.solution_next.MPCmin**
-               (-self.CRRA)*self.Rfree_list**(1.0-self.CRRA)))**(1.0/self.CRRA)
-        self.MPCminNow = 1.0/(1.0 + temp)
+               (-self.CRRA)*self.Rfree_list**(1.0-self.CRRA)))**(old_div(1.0,self.CRRA))
+        self.MPCminNow = old_div(1.0,(1.0 + temp))

     def makeSolution(self,cNrm,mNrm):
         '''
@@ -451,8 +456,8 @@
         solution = ConsumerSolution() # An empty solution to which we'll add state-conditional solutions
         # Calculate the MPC at each market resource gridpoint in each state (if desired)
         if self.CubicBool:
-            dcda = self.EndOfPrdvPP/self.uPP(np.array(self.cNrmNow))
-            MPC = dcda/(dcda+1.0)
+            dcda = old_div(self.EndOfPrdvPP,self.uPP(np.array(self.cNrmNow)))
+            MPC = old_div(dcda,(dcda+1.0))
             self.MPC_temp = np.hstack((np.reshape(self.MPCmaxNow,(self.StateCount,1)),MPC))
             interpfunc = self.makeCubiccFunc
         else:
@@ -578,8 +583,8 @@
         vNvrsP = vPnow*self.uinvP(vNrmNow)
         mNrm_temp = np.insert(mGrid,0,mNrmMin) # add the lower bound
         vNvrs = np.insert(vNvrs,0,0.0)
-        vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff[i]**(-self.CRRA/(1.0-self.CRRA)))
-        MPCminNvrs = self.MPCminNow[i]**(-self.CRRA/(1.0-self.CRRA))
+        vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff[i]**(old_div(-self.CRRA,(1.0-self.CRRA))))
+        MPCminNvrs = self.MPCminNow[i]**(old_div(-self.CRRA,(1.0-self.CRRA)))
         vNvrsFunc_i = CubicInterp(mNrm_temp,vNvrs,vNvrsP,MPCminNvrs*self.hNrmNow[i],MPCminNvrs)
         # "Recurve" the decurved value function and add it to the list
@@ -829,7 +834,7 @@
         if self.global_markov:
             base_draws = np.ones(self.AgentCount)*drawUniform(1,seed=self.RNG.randint(0,2**31-1))
         else:
-            base_draws = self.RNG.permutation(np.arange(self.AgentCount,dtype=float)/self.AgentCount + 1.0/(2*self.AgentCount))
+            base_draws = self.RNG.permutation(old_div(np.arange(self.AgentCount,dtype=float),self.AgentCount) + old_div(1.0,(2*self.AgentCount)))
         newborn = self.t_age == 0 # Don't change Markov state for those who were just born (unless global_markov)
         MrkvPrev = self.MrkvNow
         MrkvNow = np.zeros(self.AgentCount,dtype=int)
@@ -987,7 +992,7 @@
             this_mergedIncomeDstn2 = combineIndepDstns([this_IncomeDstn[0],this_IncomeDstn[2]],TranShkAggDstn)
             this_mergedIncomeDstn[2] = this_mergedIncomeDstn[2]*this_mergedIncomeDstn2[1] #multiply idiosyncratic and aggregate transitive shocks together
             merged_IncomeDstn.append(this_mergedIncomeDstn)
-        Rfree_eff = Rfree/LivPrb[0] #To account for the division of assets of those who die
+        Rfree_eff = old_div(Rfree,LivPrb[0]) #To account for the division of assets of those who die
         solution_now = solveConsMarkov(solution_next,merged_IncomeDstn,LivPrb,DiscFac,CRRA,Rfree_eff,PermGroFac,MrkvArray,BoroCnstArt,aXtraGrid,vFuncBool,CubicBool)
         return solution_now
@@ -1021,7 +1026,7 @@
         '''
         self.initializeSim()
         self.aLvlNow = self.aInit*np.ones(self.AgentCount) # Start simulation near SS
-        self.aNrmNow = self.aLvlNow/self.pLvlNow
+        self.aNrmNow = old_div(self.aLvlNow,self.pLvlNow)

     def simBirth(self,which_agents):
         '''
@@ -1063,7 +1068,7 @@
         who_lives = np.logical_not(which_agents)
         wealth_living = np.sum(self.aLvlNow[who_lives])
         wealth_dead = np.sum(self.aLvlNow[which_agents])
-        Ractuarial = 1.0 + wealth_dead/wealth_living
+        Ractuarial = 1.0 + old_div(wealth_dead,wealth_living)
         self.aNrmNow[who_lives] = self.aNrmNow[who_lives]*Ractuarial
         self.aLvlNow[who_lives] = self.aLvlNow[who_lives]*Ractuarial
         return which_agents
@@ -1308,7 +1313,7 @@

 if __name__ == '__main__':
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     from HARK.utilities import plotFuncs
     from time import clock
     from copy import copy
@@ -1322,10 +1327,10 @@
     urate_bad = 0.12        # Unemployment rate when economy is in bad state
     bust_prob = 0.01        # Probability of economy switching from good to bad
     recession_length = 20   # Averange length of bad state
-    p_reemploy = 1.0/unemp_length
+    p_reemploy = old_div(1.0,unemp_length)
     p_unemploy_good = p_reemploy*urate_good/(1-urate_good)
     p_unemploy_bad = p_reemploy*urate_bad/(1-urate_bad)
-    boom_prob = 1.0/recession_length
+    boom_prob = old_div(1.0,recession_length)
     MrkvArray = np.array([[(1-p_unemploy_good)*(1-bust_prob),p_unemploy_good*(1-bust_prob),
                            (1-p_unemploy_good)*bust_prob,p_unemploy_good*bust_prob],
                           [p_reemploy*(1-bust_prob),(1-p_reemploy)*(1-bust_prob),
@@ -1384,7 +1389,7 @@
     ImmunityT = 6          # Number of periods of immunity
     StateCount = ImmunityT+1 # Total number of Markov states
-    IncomeDstnReg = [np.array([1-UnempPrb,UnempPrb]), np.array([1.0,1.0]), np.array([1.0/(1.0-UnempPrb),0.0])] # Ordinary income distribution
+    IncomeDstnReg = [np.array([1-UnempPrb,UnempPrb]), np.array([1.0,1.0]), np.array([old_div(1.0,(1.0-UnempPrb)),0.0])] # Ordinary income distribution
     IncomeDstnImm = [np.array([1.0]), np.array([1.0]), np.array([1.0])] # Income distribution when unemployed
     IncomeDstn = [IncomeDstnReg] + ImmunityT*[IncomeDstnImm] # Income distribution for each Markov state, in a list
@@ -1426,7 +1431,7 @@
     IncomeDstn = StateCount*[IncomeDstnReg] # Same simple income distribution in each state
     # Make the state transition array for this type: Persistence probability of remaining in the same state, equiprobable otherwise
-    MrkvArray = Persistence*np.eye(StateCount) + (1.0/StateCount)*(1.0-Persistence)*np.ones((StateCount,StateCount))
+    MrkvArray = Persistence*np.eye(StateCount) + (old_div(1.0,StateCount))*(1.0-Persistence)*np.ones((StateCount,StateCount))
     init_serial_growth = copy(Params.init_idiosyncratic_shocks)
     init_serial_growth['MrkvArray'] = [MrkvArray]
```
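Reliance on Py2 implicit integer division tends to surface immediately under Python 3 because float results are rejected wherever an index is expected. A minimal illustration:

```python
agents = ['type0', 'type1', 'type2', 'type3']
idx = len(agents) / 2        # 2.0 under true division
try:
    agents[idx]              # Python 3: TypeError -- lists refuse float indices
except TypeError:
    pass
assert agents[len(agents) // 2] == 'type2'   # the cross-compatible spelling
```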
```diff
--- ./HARK/ConsumptionSaving/TractableBufferStockModel.py (original)
+++ ./HARK/ConsumptionSaving/TractableBufferStockModel.py (refactored)
@@ -18,8 +18,13 @@
 Despite the non-standard solution method, the iterative process can be embedded
 in the HARK framework, as shown below.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from past.utils import old_div
 import numpy as np
 from HARK import AgentType, NullFunc, Solution
@@ -124,11 +129,11 @@
         Marginal propensity to consume this period.
     '''
     uPP = lambda x : utilityPP(x,gam=CRRA)
-    cNow = PermGroFacCmp*(DiscFac*Rfree)**(-1.0/CRRA)*cNext*(1 + UnempPrb*((cNext/(PFMPC*(mNext-1.0)))**CRRA-1.0))**(-1.0/CRRA)
-    mNow = (PermGroFacCmp/Rfree)*(mNext - 1.0) + cNow
+    cNow = PermGroFacCmp*(DiscFac*Rfree)**(old_div(-1.0,CRRA))*cNext*(1 + UnempPrb*((old_div(cNext,(PFMPC*(mNext-1.0))))**CRRA-1.0))**(old_div(-1.0,CRRA))
+    mNow = (old_div(PermGroFacCmp,Rfree))*(mNext - 1.0) + cNow
     cUNext = PFMPC*(mNow-cNow)*Rnrm
-    natural = Beth*Rnrm*(1.0/uPP(cNow))*((1.0-UnempPrb)*uPP(cNext)*MPCnext + UnempPrb*uPP(cUNext)*PFMPC)
-    MPCnow = natural / (natural + 1)
+    natural = Beth*Rnrm*(old_div(1.0,uPP(cNow)))*((1.0-UnempPrb)*uPP(cNext)*MPCnext + UnempPrb*uPP(cUNext)*PFMPC)
+    MPCnow = old_div(natural, (natural + 1))
     return mNow, cNow, MPCnow
@@ -263,24 +268,24 @@
         uPPPP = lambda x : utilityPPPP(x,gam=self.CRRA)

         # Define some useful constants from model primitives
-        self.PermGroFacCmp = self.PermGroFac/(1.0-self.UnempPrb) # "uncertainty compensated" wage growth factor
-        self.Rnrm = self.Rfree/self.PermGroFacCmp # net interest factor (Rfree normalized by wage growth)
-        self.PFMPC = 1.0-(self.Rfree**(-1.0))*(self.Rfree*self.DiscFac)**(1.0/self.CRRA) # MPC for a perfect forsight consumer
+        self.PermGroFacCmp = old_div(self.PermGroFac,(1.0-self.UnempPrb)) # "uncertainty compensated" wage growth factor
+        self.Rnrm = old_div(self.Rfree,self.PermGroFacCmp) # net interest factor (Rfree normalized by wage growth)
+        self.PFMPC = 1.0-(self.Rfree**(-1.0))*(self.Rfree*self.DiscFac)**(old_div(1.0,self.CRRA)) # MPC for a perfect forsight consumer
         self.Beth = self.Rnrm*self.DiscFac*self.PermGroFacCmp**(1.0-self.CRRA)

         # Verify that this consumer is impatient
-        PatFacGrowth = (self.Rfree*self.DiscFac)**(1.0/self.CRRA)/self.PermGroFacCmp
-        PatFacReturn = (self.Rfree*self.DiscFac)**(1.0/self.CRRA)/self.Rfree
+        PatFacGrowth = old_div((self.Rfree*self.DiscFac)**(old_div(1.0,self.CRRA)),self.PermGroFacCmp)
+        PatFacReturn = old_div((self.Rfree*self.DiscFac)**(old_div(1.0,self.CRRA)),self.Rfree)
         if PatFacReturn >= 1.0:
             raise Exception("Employed consumer not return impatient, cannot solve!")
         if PatFacGrowth >= 1.0:
             raise Exception("Employed consumer not growth impatient, cannot solve!")

         # Find target money and consumption
-        Pi = (1+(PatFacGrowth**(-self.CRRA)-1.0)/self.UnempPrb)**(1/self.CRRA)
-        self.h = (1.0/(1.0-self.PermGroFac/self.Rfree))
+        Pi = (1+old_div((PatFacGrowth**(-self.CRRA)-1.0),self.UnempPrb))**(old_div(1,self.CRRA))
+        self.h = (old_div(1.0,(1.0-old_div(self.PermGroFac,self.Rfree))))
         zeta = self.Rnrm*self.PFMPC*Pi
-        self.mTarg = 1.0+(self.Rfree/(self.PermGroFacCmp+zeta*self.PermGroFacCmp-self.Rfree))
+        self.mTarg = 1.0+(old_div(self.Rfree,(self.PermGroFacCmp+zeta*self.PermGroFacCmp-self.Rfree)))
         self.cTarg = (1.0-self.Rnrm**(-1.0))*self.mTarg+self.Rnrm**(-1.0)
         mTargU = (self.mTarg - self.cTarg)*self.Rnrm
         cTargU = mTargU*self.PFMPC
@@ -295,15 +300,15 @@
         self.MMMPCtarg = newton(mmmpcTargFixedPointFunc,0)

         # Find the MPC at m=0
-        f_temp = lambda k : self.Beth*self.Rnrm*self.UnempPrb*(self.PFMPC*self.Rnrm*((1.0-k)/k))**(-self.CRRA-1.0)*self.PFMPC
-        mpcAtZeroFixedPointFunc = lambda k : k - f_temp(k)/(1 + f_temp(k))
+        f_temp = lambda k : self.Beth*self.Rnrm*self.UnempPrb*(self.PFMPC*self.Rnrm*(old_div((1.0-k),k)))**(-self.CRRA-1.0)*self.PFMPC
+        mpcAtZeroFixedPointFunc = lambda k : k - old_div(f_temp(k),(1 + f_temp(k)))
         #self.MPCmax = newton(mpcAtZeroFixedPointFunc,0.5)
         self.MPCmax = brentq(mpcAtZeroFixedPointFunc,self.PFMPC,0.99,xtol=0.00000001,rtol=0.00000001)

         # Make the initial list of Euler points: target and perturbation to either side
         mNrm_list = [self.mTarg-self.SSperturbance, self.mTarg, self.mTarg+self.SSperturbance]
-        c_perturb_lo = self.cTarg - self.SSperturbance*self.MPCtarg + 0.5*self.SSperturbance**2.0*self.MMPCtarg - (1.0/6.0)*self.SSperturbance**3.0*self.MMMPCtarg
-        c_perturb_hi = self.cTarg + self.SSperturbance*self.MPCtarg + 0.5*self.SSperturbance**2.0*self.MMPCtarg + (1.0/6.0)*self.SSperturbance**3.0*self.MMMPCtarg
+        c_perturb_lo = self.cTarg - self.SSperturbance*self.MPCtarg + 0.5*self.SSperturbance**2.0*self.MMPCtarg - (old_div(1.0,6.0))*self.SSperturbance**3.0*self.MMMPCtarg
+        c_perturb_hi = self.cTarg + self.SSperturbance*self.MPCtarg + 0.5*self.SSperturbance**2.0*self.MMPCtarg + (old_div(1.0,6.0))*self.SSperturbance**3.0*self.MMMPCtarg
         cNrm_list = [c_perturb_lo, self.cTarg, c_perturb_hi]
         MPC_perturb_lo = self.MPCtarg - self.SSperturbance*self.MMPCtarg + 0.5*self.SSperturbance**2.0*self.MMMPCtarg
         MPC_perturb_hi = self.MPCtarg + self.SSperturbance*self.MMPCtarg + 0.5*self.SSperturbance**2.0*self.MMMPCtarg
@@ -318,8 +323,8 @@
         self.solution_terminal = solution_terminal

         # Make two linear steady state functions
-        self.cSSfunc = lambda m : m*((self.Rnrm*self.PFMPC*Pi)/(1.0+self.Rnrm*self.PFMPC*Pi))
-        self.mSSfunc = lambda m : (self.PermGroFacCmp/self.Rfree)+(1.0-self.PermGroFacCmp/self.Rfree)*m
+        self.cSSfunc = lambda m : m*(old_div((self.Rnrm*self.PFMPC*Pi),(1.0+self.Rnrm*self.PFMPC*Pi)))
+        self.mSSfunc = lambda m : (old_div(self.PermGroFacCmp,self.Rfree))+(1.0-old_div(self.PermGroFacCmp,self.Rfree))*m

     def postSolve(self):
         '''
@@ -466,7 +471,7 @@
     import numpy as np                   # numeric Python
     from HARK.utilities import plotFuncs # basic plotting tools
-    from ConsMarkovModel import MarkovConsumerType # An alternative, much longer way to solve the TBS model
+    from .ConsMarkovModel import MarkovConsumerType # An alternative, much longer way to solve the TBS model
     from time import clock               # timing utility

     do_simulation = True
@@ -511,7 +516,7 @@
     MrkvArray = np.array([[1.0-base_primitives['UnempPrb'],base_primitives['UnempPrb']],[0.0,1.0]]) # Define the two state, absorbing unemployment Markov array
     init_consumer_objects = {"CRRA":base_primitives['CRRA'],
                              "Rfree":np.array(2*[base_primitives['Rfree']]), # Interest factor (same in both states)
-                             "PermGroFac":[np.array(2*[base_primitives['PermGroFac']/(1.0-base_primitives['UnempPrb'])])], # Unemployment-compensated permanent growth factor
+                             "PermGroFac":[np.array(2*[old_div(base_primitives['PermGroFac'],(1.0-base_primitives['UnempPrb']))])], # Unemployment-compensated permanent growth factor
                              "BoroCnstArt":None,    # Artificial borrowing constraint
                              "PermShkStd":[0.0],    # Permanent shock standard deviation
                              "PermShkCount":1,      # Number of shocks in discrete permanent shock distribution
```
```diff
--- ./HARK/ConsumptionSaving/ConsAggShockModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsAggShockModel.py (refactored)
@@ -4,7 +4,13 @@
 basic solver. Also includes a subclass of Market called CobbDouglasEconomy,
 used for solving "macroeconomic" models with aggregate shocks.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import numpy as np
 import scipy.stats as stats
 from HARK.interpolation import LinearInterp, LinearInterpOnInterp1D, ConstantFunction, IdentityFunction, \
@@ -13,7 +19,7 @@
                            CRRAutility_invP, CRRAutility_inv, combineIndepDstns, \
                            approxMeanOneLognormal
 from HARK.simulation import drawDiscrete, drawUniform
-from ConsIndShockModel import ConsumerSolution, IndShockConsumerType
+from .ConsIndShockModel import ConsumerSolution, IndShockConsumerType
 from HARK import HARKobject, Market, AgentType
 from copy import deepcopy
 import matplotlib.pyplot as plt
@@ -99,7 +105,7 @@
         '''
         self.initializeSim()
         self.aLvlNow = self.kInit*np.ones(self.AgentCount) # Start simulation near SS
-        self.aNrmNow = self.aLvlNow/self.pLvlNow
+        self.aNrmNow = old_div(self.aLvlNow,self.pLvlNow)

     def updateSolutionTerminal(self):
         '''
@@ -230,7 +236,7 @@
         who_lives = np.logical_not(who_dies)
         wealth_living = np.sum(self.aLvlNow[who_lives])
         wealth_dead = np.sum(self.aLvlNow[who_dies])
-        Ractuarial = 1.0 + wealth_dead/wealth_living
+        Ractuarial = 1.0 + old_div(wealth_dead,wealth_living)
         self.aNrmNow[who_lives] = self.aNrmNow[who_lives]*Ractuarial
         self.aLvlNow[who_lives] = self.aLvlNow[who_lives]*Ractuarial
         return who_dies
@@ -586,10 +592,10 @@
     # Calculate returns to capital and labor in the next period
     AaggNow_tiled = np.tile(np.reshape(AFunc(Mgrid),(Mcount,1,1)),(1,aCount,ShkCount))
-    kNext_array = AaggNow_tiled/(PermGroFacAgg*PermShkAggValsNext_tiled) # Next period's aggregate capital to labor ratio
-    kNextEff_array = kNext_array/TranShkAggValsNext_tiled # Same thing, but account for transitory shock
+    kNext_array = old_div(AaggNow_tiled,(PermGroFacAgg*PermShkAggValsNext_tiled)) # Next period's aggregate capital to labor ratio
+    kNextEff_array = old_div(kNext_array,TranShkAggValsNext_tiled) # Same thing, but account for transitory shock
     R_array = Rfunc(kNextEff_array) # Interest factor on aggregate assets
-    Reff_array = R_array/LivPrb # Effective interest factor on individual assets for survivors
+    Reff_array = old_div(R_array,LivPrb) # Effective interest factor on individual assets for survivors
     wEff_array = wFunc(kNextEff_array)*TranShkAggValsNext_tiled # Effective wage rate (accounts for labor supply)
     PermShkTotal_array = PermGroFac*PermGroFacAgg*PermShkValsNext_tiled*PermShkAggValsNext_tiled # total / combined permanent shock
     Mnext_array = kNext_array*R_array + wEff_array # next period's aggregate market resources
@@ -614,7 +620,7 @@
     EndOfPrdvP = DiscFac*LivPrb*np.sum(vPnext_array*ShkPrbsNext_tiled,axis=2)
-    cNrmNow = EndOfPrdvP**(-1.0/CRRA)
+    cNrmNow = EndOfPrdvP**(old_div(-1.0,CRRA))
     mNrmNow = aNrmNow_tiled[:,:,0] + cNrmNow
@@ -757,10 +763,10 @@
         AaggNow_tiled = np.tile(np.reshape(AaggGrid,(Mcount,1,1)),(1,aCount,ShkCount))

         # Calculate returns to capital and labor in the next period
-        kNext_array = AaggNow_tiled/(PermGroFacAgg[j]*PermShkAggValsNext_tiled) # Next period's aggregate capital to labor ratio
-        kNextEff_array = kNext_array/TranShkAggValsNext_tiled # Same thing, but account for *transitory* shock
+        kNext_array = old_div(AaggNow_tiled,(PermGroFacAgg[j]*PermShkAggValsNext_tiled)) # Next period's aggregate capital to labor ratio
+        kNextEff_array = old_div(kNext_array,TranShkAggValsNext_tiled) # Same thing, but account for *transitory* shock
         R_array = Rfunc(kNextEff_array) # Interest factor on aggregate assets
-        Reff_array = R_array/LivPrb # Effective interest factor on individual assets *for survivors*
+        Reff_array = old_div(R_array,LivPrb) # Effective interest factor on individual assets *for survivors*
         wEff_array = wFunc(kNextEff_array)*TranShkAggValsNext_tiled # Effective wage rate (accounts for labor supply)
         PermShkTotal_array = PermGroFac*PermGroFacAgg[j]*PermShkValsNext_tiled*PermShkAggValsNext_tiled # total / combined permanent shock
         Mnext_array = kNext_array*R_array + wEff_array # next period's aggregate market resources
@@ -786,7 +792,7 @@
         # Make the conditional end-of-period marginal value function
         BoroCnstNat = LinearInterp(np.insert(AaggGrid,0,0.0),np.insert(BoroCnstNat_vec,0,0.0))
-        EndOfPrdvPnvrs = np.concatenate((np.zeros((Mcount,1)),EndOfPrdvP**(-1./CRRA)),axis=1)
+        EndOfPrdvPnvrs = np.concatenate((np.zeros((Mcount,1)),EndOfPrdvP**(old_div(-1.,CRRA))),axis=1)
         EndOfPrdvPnvrsFunc_base = BilinearInterp(np.transpose(EndOfPrdvPnvrs),np.insert(aXtraGrid,0,0.0),AaggGrid)
         EndOfPrdvPnvrsFunc = VariableLowerBoundFunc2D(EndOfPrdvPnvrsFunc_base,BoroCnstNat)
         EndOfPrdvPfunc_cond.append(MargValueFunc2D(EndOfPrdvPnvrsFunc,CRRA))
@@ -825,7 +831,7 @@
             EndOfPrdvP += MrkvArray[i,j]*temp

         # Calculate consumption and the endogenous mNrm gridpoints for this state
-        cNrmNow = EndOfPrdvP**(-1./CRRA)
+        cNrmNow = EndOfPrdvP**(old_div(-1.,CRRA))
         mNrmNow = aNrmNow_tiled + cNrmNow

         # Loop through the values in Mgrid and make a piecewise linear consumption function for each
@@ -934,12 +940,12 @@
         -------
         None
         '''
-        self.kSS = ((self.getPermGroFacAggLR()**(self.CRRA)/self.DiscFac - (1.0-self.DeprFac))/self.CapShare)**(1.0/(self.CapShare-1.0))
+        self.kSS = (old_div((old_div(self.getPermGroFacAggLR()**(self.CRRA),self.DiscFac) - (1.0-self.DeprFac)),self.CapShare))**(old_div(1.0,(self.CapShare-1.0)))
         self.KtoYSS = self.kSS**(1.0-self.CapShare)
         self.wRteSS = (1.0-self.CapShare)*self.kSS**(self.CapShare)
         self.RfreeSS = (1.0 + self.CapShare*self.kSS**(self.CapShare-1.0) - self.DeprFac)
         self.MSS = self.kSS*self.RfreeSS + self.wRteSS
-        self.convertKtoY = lambda KtoY : KtoY**(1.0/(1.0 - self.CapShare)) # converts K/Y to K/L
+        self.convertKtoY = lambda KtoY : KtoY**(old_div(1.0,(1.0 - self.CapShare))) # converts K/Y to K/L
         self.Rfunc = lambda k : (1.0 + self.CapShare*k**(self.CapShare-1.0) - self.DeprFac)
         self.wFunc = lambda k : ((1.0-self.CapShare)*k**(self.CapShare))
         self.KtoLnow_init = self.kSS
@@ -1047,7 +1053,7 @@
         aggregate permanent and transitory shocks.
         '''
         # Calculate aggregate savings
-        AaggPrev = np.mean(np.array(aLvlNow))/np.mean(pLvlNow) # End-of-period savings from last period
+        AaggPrev = old_div(np.mean(np.array(aLvlNow)),np.mean(pLvlNow)) # End-of-period savings from last period
         # Calculate aggregate capital this period
         AggregateK = np.mean(np.array(aLvlNow)) # ...becomes capital today
         # This version uses end-of-period assets and
@@ -1063,10 +1069,10 @@
         AggregateL = np.mean(pLvlNow)*PermShkAggNow

         # Calculate the interest factor and wage rate this period
-        KtoLnow = AggregateK/AggregateL
+        KtoLnow = old_div(AggregateK,AggregateL)
         self.KtoYnow = KtoLnow**(1.0-self.CapShare)
-        RfreeNow = self.Rfunc(KtoLnow/TranShkAggNow)
-        wRteNow = self.wFunc(KtoLnow/TranShkAggNow)
+        RfreeNow = self.Rfunc(old_div(KtoLnow,TranShkAggNow))
+        wRteNow = self.wFunc(old_div(KtoLnow,TranShkAggNow))
         MaggNow = KtoLnow*RfreeNow + wRteNow*TranShkAggNow
         self.KtoLnow = KtoLnow # Need to store this as it is a sow variable
@@ -1278,13 +1284,13 @@
         self.Shk_idx += 1

         # Factor prices are constant
-        RfreeNow = self.Rfunc(1.0/PermShkAggNow)
-        wRteNow = self.wFunc(1.0/PermShkAggNow)
+        RfreeNow = self.Rfunc(old_div(1.0,PermShkAggNow))
+        wRteNow = self.wFunc(old_div(1.0,PermShkAggNow))
         # Aggregates are irrelavent
         AaggNow = 1.0
         MaggNow = 1.0
-        KtoLnow = 1.0/PermShkAggNow
+        KtoLnow = old_div(1.0,PermShkAggNow)
         # Package the results into an object and return it
         AggVarsNow = CobbDouglasAggVars(MaggNow,AaggNow,KtoLnow,RfreeNow,wRteNow,PermShkAggNow,TranShkAggNow)
@@ -1367,7 +1373,7 @@
         w, v = np.linalg.eig(np.transpose(self.MrkvArray))
         idx = (np.abs(w-1.0)).argmin()
         x = v[:,idx].astype(float)
-        LR_dstn = (x/np.sum(x))
+        LR_dstn = (old_div(x,np.sum(x)))
         # Return the weighted average of aggregate permanent income growth factors
         PermGroFacAggLR = np.dot(LR_dstn,np.array(self.PermGroFacAgg))
@@ -1483,7 +1489,7 @@
         w, v = np.linalg.eig(np.transpose(self.MrkvArray))
         idx = (np.abs(w-1.0)).argmin()
         x = v[:,idx].astype(float)
-        LR_dstn = (x/np.sum(x))
+        LR_dstn = (old_div(x,np.sum(x)))
         # Initialize the Markov history and set up transitions
         MrkvNow_hist = np.zeros(self.act_T_orig,dtype=int)
@@ -1517,12 +1523,12 @@
             never_visited = np.where(np.array(state_T == 0))[0]
             MrkvNow = np.random.choice(never_visited)
         else: # Otherwise, use logit choice probabilities to visit an underrepresented state
-            emp_dstn = state_T/act_T
-            ratios = LR_dstn/emp_dstn
+            emp_dstn = old_div(state_T,act_T)
+            ratios = old_div(LR_dstn,emp_dstn)
             ratios_adj = ratios - np.max(ratios)
-            ratios_exp = np.exp(ratios_adj/logit_scale)
+            ratios_exp = np.exp(old_div(ratios_adj,logit_scale))
             ratios_sum = np.sum(ratios_exp)
-            jump_probs = ratios_exp/ratios_sum
+            jump_probs = old_div(ratios_exp,ratios_sum)
             cum_probs = np.cumsum(jump_probs)
             MrkvNow = np.searchsorted(cum_probs,draws[-1])
@@ -1750,7 +1756,7 @@
###############################################################################

 def demo():
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     from time import clock
     from HARK.utilities import plotFuncs
     mystr = lambda number : "{:.4f}".format(number)
@@ -1864,8 +1870,8 @@
     # Make a Krusell-Smith agent type
     # NOTE: These agents aren't exactly like KS, as they don't have serially correlated unemployment
     KSexampleType = deepcopy(AggShockMrkvExample)
-    KSexampleType.IncomeDstn[0] = [[np.array([0.96,0.04]),np.array([1.0,1.0]),np.array([1.0/0.96,0.0])],
-                                   [np.array([0.90,0.10]),np.array([1.0,1.0]),np.array([1.0/0.90,0.0])]]
+    KSexampleType.IncomeDstn[0] = [[np.array([0.96,0.04]),np.array([1.0,1.0]),np.array([old_div(1.0,0.96),0.0])],
+                                   [np.array([0.90,0.10]),np.array([1.0,1.0]),np.array([old_div(1.0,0.90),0.0])]]
     # Make a KS economy
     KSeconomy = deepcopy(MrkvEconomyExample)
```
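One caveat I would like confirmed along with the divisions above: as far as I can tell, `old_div` decides via `isinstance(a, numbers.Integral)`, which is False for integer numpy *arrays*, so those fall through to true division even though Py2's bare `/` would have floored them elementwise. The ConsAggShockModel sites all divide float ratios, so they are unaffected, but it means neither `old_div` nor the division import reproduces Py2 output for integer arrays; only `//` does. Sketch (output as under Python 3):

```python
import numpy as np

counts = np.array([5, 7, 9])   # integer dtype
print(counts / 2)              # [2.5 3.5 4.5] with true division
print(counts // 2)             # [2 3 4] -- what Py2's bare "/" produced
```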
```diff
--- ./HARK/ConsumptionSaving/ConsIndShockModel.py (original)
+++ ./HARK/ConsumptionSaving/ConsIndShockModel.py (refactored)
@@ -12,7 +12,14 @@
 See NARK for information on variable naming conventions.
 See HARK documentation for mathematical descriptions of the models being solved.
 '''
+from __future__ import division
+from __future__ import print_function
+from __future__ import absolute_import
+
+from builtins import str
+from builtins import range
+from builtins import object
+from past.utils import old_div
 from copy import copy, deepcopy
 import numpy as np
 from scipy.optimize import newton
@@ -403,7 +410,7 @@
         -------
         none
         '''
-        MPCnvrs = self.MPC**(-self.CRRA/(1.0-self.CRRA))
+        MPCnvrs = self.MPC**(old_div(-self.CRRA,(1.0-self.CRRA)))
         vFuncNvrs = LinearInterp(np.array([self.mNrmMin, self.mNrmMin+1.0]),np.array([0.0, MPCnvrs]))
         self.vFunc = ValueFunc(vFuncNvrs,self.CRRA)
         self.vPfunc = MargValueFunc(self.cFunc,self.CRRA)
@@ -421,11 +428,11 @@
         none
         '''
         # Calculate human wealth this period (and lower bound of m)
-        self.hNrmNow = (self.PermGroFac/self.Rfree)*(self.solution_next.hNrm + 1.0)
+        self.hNrmNow = (old_div(self.PermGroFac,self.Rfree))*(self.solution_next.hNrm + 1.0)
         self.mNrmMin = -self.hNrmNow
         # Calculate the (constant) marginal propensity to consume
-        PatFac = ((self.Rfree*self.DiscFacEff)**(1.0/self.CRRA))/self.Rfree
-        self.MPC = 1.0/(1.0 + PatFac/self.solution_next.MPCmin)
+        PatFac = old_div(((self.Rfree*self.DiscFacEff)**(old_div(1.0,self.CRRA))),self.Rfree)
+        self.MPC = old_div(1.0,(1.0 + old_div(PatFac,self.solution_next.MPCmin)))
         # Construct the consumption function
         self.cFunc = LinearInterp([self.mNrmMin, self.mNrmMin+1.0],[0.0, self.MPC])
         # Add two attributes to enable calculation of steady state market resources
@@ -449,7 +456,7 @@
             Same solution that was passed, but now with the attribute mNrmSS.
         '''
         # Make a linear function of all combinations of c and m that yield mNext = mNow
-        mZeroChangeFunc = lambda m : (1.0-self.PermGroFac/self.Rfree)*m + (self.PermGroFac/self.Rfree)*self.ExIncNext
+        mZeroChangeFunc = lambda m : (1.0-old_div(self.PermGroFac,self.Rfree))*m + (old_div(self.PermGroFac,self.Rfree))*self.ExIncNext
         # Find the steady state level of market resources
         searchSSfunc = lambda m : solution.cFunc(m) - mZeroChangeFunc(m) # A zero of this is SS market resources
@@ -694,12 +701,12 @@
         self.vFuncNext = solution_next.vFunc

         # Update the bounding MPCs and PDV of human wealth:
-        self.PatFac = ((self.Rfree*self.DiscFacEff)**(1.0/self.CRRA))/self.Rfree
-        self.MPCminNow = 1.0/(1.0 + self.PatFac/solution_next.MPCmin)
+        self.PatFac = old_div(((self.Rfree*self.DiscFacEff)**(old_div(1.0,self.CRRA))),self.Rfree)
+        self.MPCminNow = old_div(1.0,(1.0 + old_div(self.PatFac,solution_next.MPCmin)))
         self.ExIncNext = np.dot(self.ShkPrbsNext,self.TranShkValsNext*self.PermShkValsNext)
         self.hNrmNow = self.PermGroFac/self.Rfree*(self.ExIncNext + solution_next.hNrm)
-        self.MPCmaxNow = 1.0/(1.0 + (self.WorstIncPrb**(1.0/self.CRRA))*
-                         self.PatFac/solution_next.MPCmax)
+        self.MPCmaxNow = old_div(1.0,(1.0 + (self.WorstIncPrb**(old_div(1.0,self.CRRA)))*
+                         self.PatFac/solution_next.MPCmax))

     def defBoroCnst(self,BoroCnstArt):
@@ -1005,8 +1012,8 @@
         EndOfPrdvPP = self.DiscFacEff*self.Rfree*self.Rfree*self.PermGroFac**(-self.CRRA-1.0)*\
                       np.sum(self.PermShkVals_temp**(-self.CRRA-1.0)*
                       self.vPPfuncNext(self.mNrmNext)*self.ShkPrbs_temp,axis=0)
-        dcda = EndOfPrdvPP/self.uPP(np.array(cNrm[1:]))
-        MPC = dcda/(dcda+1.)
+        dcda = old_div(EndOfPrdvPP,self.uPP(np.array(cNrm[1:])))
+        MPC = old_div(dcda,(dcda+1.))
         MPC = np.insert(MPC,0,self.MPCmaxNow)
         cFuncNowUnc = CubicInterp(mNrm,cNrm,MPC,self.MPCminNow*self.hNrmNow,self.MPCminNow)
@@ -1093,8 +1100,8 @@
         vNvrsP = vPnow*self.uinvP(vNrmNow)
         mNrm_temp = np.insert(mNrm_temp,0,self.mNrmMinNow)
         vNvrs = np.insert(vNvrs,0,0.0)
-        vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff**(-self.CRRA/(1.0-self.CRRA)))
-        MPCminNvrs = self.MPCminNow**(-self.CRRA/(1.0-self.CRRA))
+        vNvrsP = np.insert(vNvrsP,0,self.MPCmaxEff**(old_div(-self.CRRA,(1.0-self.CRRA))))
+        MPCminNvrs = self.MPCminNow**(old_div(-self.CRRA,(1.0-self.CRRA)))
         vNvrsFuncNow = CubicInterp(mNrm_temp,vNvrs,vNvrsP,MPCminNvrs*self.hNrmNow,MPCminNvrs)
         vFuncNow = ValueFunc(vNvrsFuncNow,self.CRRA)
         return vFuncNow
@@ -1348,8 +1355,8 @@
         # Recalculate the minimum MPC and human wealth using the interest factor on saving.
         # This overwrites values from setAndUpdateValues, which were based on Rboro instead.
         if KinkBool:
-            PatFacTop = ((self.Rsave*self.DiscFacEff)**(1.0/self.CRRA))/self.Rsave
-            self.MPCminNow = 1.0/(1.0 + PatFacTop/self.solution_next.MPCmin)
+            PatFacTop = old_div(((self.Rsave*self.DiscFacEff)**(old_div(1.0,self.CRRA))),self.Rsave)
+            self.MPCminNow = old_div(1.0,(1.0 + old_div(PatFacTop,self.solution_next.MPCmin)))
             self.hNrmNow = self.PermGroFac/self.Rsave*(np.dot(self.ShkPrbsNext,
                            self.TranShkValsNext*self.PermShkValsNext) + self.solution_next.hNrm)
@@ -1623,7 +1630,7 @@
         # Calculate new states: normalized market resources and permanent income level
         self.pLvlNow = pLvlPrev*self.PermShkNow # Updated permanent income level
         self.PlvlAggNow = self.PlvlAggNow*self.PermShkAggNow # Updated aggregate permanent productivity level
-        ReffNow = RfreeNow/self.PermShkNow # "Effective" interest factor on normalized assets
+        ReffNow = old_div(RfreeNow,self.PermShkNow) # "Effective" interest factor on normalized assets
         self.bNrmNow = ReffNow*aNrmPrev # Bank balances before labor income
         self.mNrmNow = self.bNrmNow + self.TranShkNow # Market resources after income
         return None
@@ -1691,31 +1698,31 @@
             return

        #Evaluate and report on the return impatience condition
-       RIC=(self.LivPrb[0]*(self.Rfree*self.DiscFac)**(1/self.CRRA))/self.Rfree
+       RIC=old_div((self.LivPrb[0]*(self.Rfree*self.DiscFac)**(old_div(1,self.CRRA))),self.Rfree)
        if RIC<1:
-           print 'The return impatiance factor value for the supplied parameter values satisfies the return impatiance condition.'
+           print('The return impatiance factor value for the supplied parameter values satisfies the return impatiance condition.')
        else:
-           print 'The given type violates the return impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.'
+           print('The given type violates the return impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.')
        if verbose:
-           print 'The return impatiance factor value for the supplied parameter values is ' + str(RIC)
+           print('The return impatiance factor value for the supplied parameter values is ' + str(RIC))

        #Evaluate and report on the absolute impatience condition
-       AIC=self.LivPrb[0]*(self.Rfree*self.DiscFac)**(1/self.CRRA)
+       AIC=self.LivPrb[0]*(self.Rfree*self.DiscFac)**(old_div(1,self.CRRA))
        if AIC<1:
-           print 'The absolute impatiance factor value for the supplied parameter values satisfies the absolute impatiance condition.'
+           print('The absolute impatiance factor value for the supplied parameter values satisfies the absolute impatiance condition.')
        else:
-           print 'The given type violates the absolute impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.'
+           print('The given type violates the absolute impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.')
        if verbose:
-           print 'The absolute impatiance factor value for the supplied parameter values is ' + str(AIC)
+           print('The absolute impatiance factor value for the supplied parameter values is ' + str(AIC))

        #Evaluate and report on the finite human wealth condition
-       FHWC=self.PermGroFac[0]/self.Rfree
+       FHWC=old_div(self.PermGroFac[0],self.Rfree)
        if FHWC<1:
-           print 'The finite human wealth factor value for the supplied parameter values satisfies the finite human wealth condition.'
+           print('The finite human wealth factor value for the supplied parameter values satisfies the finite human wealth condition.')
        else:
-           print 'The given type violates the finite human wealth condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.'
+           print('The given type violates the finite human wealth condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.')
        if verbose:
-           print 'The finite human wealth factor value for the supplied parameter values is ' + str(FHWC)
+           print('The finite human wealth factor value for the supplied parameter values is ' + str(FHWC))

 class IndShockConsumerType(PerfForesightConsumerType):
@@ -1888,15 +1895,15 @@
         WorstIncPrb = np.sum(ShkPrbsNext[(PermShkValsNext*TranShkValsNext)==WorstIncNext])

         # Calculate human wealth and the infinite horizon natural borrowing constraint
-        hNrm = (ExIncNext*self.PermGroFac[0]/self.Rfree)/(1.0-self.PermGroFac[0]/self.Rfree)
+        hNrm = old_div((ExIncNext*self.PermGroFac[0]/self.Rfree),(1.0-old_div(self.PermGroFac[0],self.Rfree)))
         temp = self.PermGroFac[0]*PermShkMinNext/self.Rfree
         BoroCnstNat = -TranShkMinNext*temp/(1.0-temp)
-        PatFac = (self.DiscFac*self.LivPrb[0]*self.Rfree)**(1.0/self.CRRA)/self.Rfree
+        PatFac = old_div((self.DiscFac*self.LivPrb[0]*self.Rfree)**(old_div(1.0,self.CRRA)),self.Rfree)
         if BoroCnstNat < self.BoroCnstArt:
             MPCmax = 1.0 # if natural borrowing constraint is overridden by artificial one, MPCmax is 1
         else:
-            MPCmax = 1.0 - WorstIncPrb**(1.0/self.CRRA)*PatFac
+            MPCmax = 1.0 - WorstIncPrb**(old_div(1.0,self.CRRA))*PatFac
         MPCmin = 1.0 - PatFac

         # Store the results as attributes of self
@@ -1969,10 +1976,10 @@
         # Calculate expected marginal value and implied optimal consumption
         ExvPnextGrid = self.DiscFac*self.Rfree*self.LivPrb[0]*self.PermGroFac[0]**(-self.CRRA)*\
                        np.sum(PermShkVals_tiled**(-self.CRRA)*vPnextArray*ShkPrbs_tiled,axis=0)
-        cOptGrid = ExvPnextGrid**(-1.0/self.CRRA)
+        cOptGrid = ExvPnextGrid**(old_div(-1.0,self.CRRA))
         # Calculate Euler error and store an interpolated function
-        EulerErrorNrmGrid = (cNowGrid - cOptGrid)/cOptGrid
+        EulerErrorNrmGrid = old_div((cNowGrid - cOptGrid),cOptGrid)
         eulerErrorFunc = LinearInterp(mNowGrid,EulerErrorNrmGrid)
         self.eulerErrorFunc = eulerErrorFunc
@@ -2012,38 +2019,38 @@
        #Get expected psi inverse
        for i in range(len(self.PermShkDstn[1])):
-           exp_psi_inv=exp_psi_inv+(1.0/self.PermShkCount)*(self.PermShkDstn[1][i])**(-1)
+           exp_psi_inv=exp_psi_inv+(old_div(1.0,self.PermShkCount))*(self.PermShkDstn[1][i])**(-1)

        #Get expected psi to the power one minus CRRA
        for i in range(len(self.PermShkDstn[1])):
-           exp_psi_to_one_minus_rho=exp_psi_to_one_minus_rho+(1.0/self.PermShkCount)*(self.PermShkDstn[1][i])**(1-self.CRRA)
+           exp_psi_to_one_minus_rho=exp_psi_to_one_minus_rho+(old_div(1.0,self.PermShkCount))*(self.PermShkDstn[1][i])**(1-self.CRRA)

        #Evaluate and report on the growth impatience condition
-       GIC=(self.LivPrb[0]*exp_psi_inv*(self.Rfree*self.DiscFac)**(1/self.CRRA))/self.PermGroFac[0]
+       GIC=old_div((self.LivPrb[0]*exp_psi_inv*(self.Rfree*self.DiscFac)**(old_div(1,self.CRRA))),self.PermGroFac[0])
        if GIC<1:
-           print 'The growth impatiance factor value for the supplied parameter values satisfies the growth impatiance condition.'
+           print('The growth impatiance factor value for the supplied parameter values satisfies the growth impatiance condition.')
        else:
-           print 'The given type violates the growth impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.'
+           print('The given type violates the growth impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.')
        if verbose:
-           print 'The growth impatiance factor value for the supplied parameter values is ' + str(GIC)
+           print('The growth impatiance factor value for the supplied parameter values is ' + str(GIC))

        #Evaluate and report on the weak return impatience condition
-       WRIC=(self.LivPrb[0]*(self.UnempPrb**(1/self.CRRA))*(self.Rfree*self.DiscFac)**(1/self.CRRA))/self.Rfree
+       WRIC=old_div((self.LivPrb[0]*(self.UnempPrb**(old_div(1,self.CRRA)))*(self.Rfree*self.DiscFac)**(old_div(1,self.CRRA))),self.Rfree)
        if WRIC<1:
-           print 'The weak return impatiance factor value for the supplied parameter values satisfies the weak return impatiance condition.'
+           print('The weak return impatiance factor value for the supplied parameter values satisfies the weak return impatiance condition.')
        else:
-           print 'The given type violates the weak return impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.'
+           print('The given type violates the weak return impatience condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.')
        if verbose:
-           print 'The weak return impatiance factor value for the supplied parameter values is ' + str(WRIC)
+           print('The weak return impatiance factor value for the supplied parameter values is ' + str(WRIC))

        #Evaluate and report on the finite value of autarky condition
        FVAC=self.LivPrb[0]*self.DiscFac*exp_psi_to_one_minus_rho*(self.PermGroFac[0]**(1-self.CRRA))
        if FVAC<1:
-           print 'The finite value of autarky factor value for the supplied parameter values satisfies the finite value of autarky condition.'
+           print('The finite value of autarky factor value for the supplied parameter values satisfies the finite value of autarky condition.')
        else:
-           print 'The given type violates the finite value of autarky condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.'
+           print('The given type violates the finite value of autarky condition with the supplied parameter values. Therefore, a nondegenerate solution may not be available. See Table 3 in "Theoretical Foundations of Buffer Stock Saving" (Carroll, 2011) to check which conditions are sufficient for a nondegenerate solution.')
        if verbose:
-           print 'The finite value of autarky factor value for the supplied parameter values is ' + str(FVAC)
+           print('The finite value of autarky factor value for the supplied parameter values is ' + str(FVAC))

 class KinkedRconsumerType(IndShockConsumerType):
     '''
@@ -2109,16 +2116,16 @@
         WorstIncPrb = np.sum(ShkPrbsNext[(PermShkValsNext*TranShkValsNext)==WorstIncNext])

         # Calculate human wealth and the infinite horizon natural borrowing constraint
-        hNrm = (ExIncNext*self.PermGroFac[0]/self.Rsave)/(1.0-self.PermGroFac[0]/self.Rsave)
+        hNrm = old_div((ExIncNext*self.PermGroFac[0]/self.Rsave),(1.0-old_div(self.PermGroFac[0],self.Rsave)))
         temp = self.PermGroFac[0]*PermShkMinNext/self.Rboro
         BoroCnstNat = -TranShkMinNext*temp/(1.0-temp)
-        PatFacTop = (self.DiscFac*self.LivPrb[0]*self.Rsave)**(1.0/self.CRRA)/self.Rsave
-        PatFacBot = (self.DiscFac*self.LivPrb[0]*self.Rboro)**(1.0/self.CRRA)/self.Rboro
+        PatFacTop = old_div((self.DiscFac*self.LivPrb[0]*self.Rsave)**(old_div(1.0,self.CRRA)),self.Rsave)
+        PatFacBot = old_div((self.DiscFac*self.LivPrb[0]*self.Rboro)**(old_div(1.0,self.CRRA)),self.Rboro)
         if BoroCnstNat < self.BoroCnstArt:
             MPCmax = 1.0 # if natural borrowing constraint is overridden by artificial one, MPCmax is 1
         else:
-            MPCmax = 1.0 - WorstIncPrb**(1.0/self.CRRA)*PatFacBot
+            MPCmax = 1.0 - WorstIncPrb**(old_div(1.0,self.CRRA))*PatFacBot
         MPCmin = 1.0 - PatFacTop

         # Store the results as attributes of self
@@ -2277,7 +2284,7 @@
     if UnempPrbRet > 0:
         PermShkValsRet = np.array([1.0, 1.0]) # Permanent income is deterministic in retirement (2 states for temp income shocks)
         TranShkValsRet = np.array([IncUnempRet,
-                                   (1.0-UnempPrbRet*IncUnempRet)/(1.0-UnempPrbRet)])
+                                   old_div((1.0-UnempPrbRet*IncUnempRet),(1.0-UnempPrbRet))])
         ShkPrbsRet = np.array([UnempPrbRet, 1.0-UnempPrbRet])
     else:
         PermShkValsRet = np.array([1.0])
@@ -2382,8 +2389,8 @@
     elif grid_type == "exp_mult":
         aXtraGrid = makeGridExpMult(ming=aXtraMin, maxg=aXtraMax, ng=aXtraCount, timestonest=exp_nest)
     else:
-        raise Exception, "grid_type not recognized in __init__." + \
-                         "Please ensure grid_type is 'linear' or 'exp_mult'"
+        raise Exception("grid_type not recognized in __init__." + \
+                        "Please ensure grid_type is 'linear' or 'exp_mult'")

     for a in aXtraExtra:
@@ -2397,7 +2404,7 @@
####################################################################################################

 if __name__ == '__main__':
-    import ConsumerParameters as Params
+    from . import ConsumerParameters as Params
     from HARK.utilities import plotFuncsDer, plotFuncs
     from time import clock
     mystr = lambda number : "{:.4f}".format(number)
```
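As a sanity check on using `//` as the explicit replacement: Py2's integer `/` floored toward negative infinity, and `//` floors identically in both Python versions, whereas `int(a/b)` truncates toward zero and would disagree for negative operands. A brute-force verification sketch:

```python
import math

# "//" reproduces Py2 integer division for every sign combination;
# int(a/b) would fail this test whenever a/b is negative and fractional.
for a in range(-24, 25):
    for b in [n for n in range(-6, 7) if n != 0]:
        assert a // b == math.floor(a / b)
```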
```diff
--- ./HARK/parallel.py (original)
+++ ./HARK/parallel.py (refactored)
@@ -3,6 +3,12 @@
 and joblib. Packages can be installed by typing "conda install dill" (etc) at
 a command prompt.
 '''
+from __future__ import division
+from __future__ import print_function
+from builtins import zip
+from builtins import str
+from builtins import range
+from past.utils import old_div
 import multiprocessing
 import numpy as np
 from time import clock
@@ -18,9 +24,9 @@
 # such that they will raise useful errors if called.
 def raiseImportError(moduleStr):
     def defineImportError(*args, **kwargs):
-        raise ImportError, moduleStr + ' could not be imported, and is required for this'+\
-            ' function. See HARK documentation for more information on how to install the ' \
-            + moduleStr + ' module.'
+        raise ImportError(moduleStr + ' could not be imported, and is required for this'+\
+            ' function. See HARK documentation for more information on how to install the ' \
+            + moduleStr + ' module.')
     return defineImportError

 Parallel = raiseImportError('joblib')
@@ -227,7 +233,7 @@
         print('Resuming search after ' + str(iters) + ' iterations and ' + str(evals) + ' function evaluations.')
-        j_list = range(N-P,N)
+        j_list = list(range(N-P,N))
         opt_params = [r_param,c_param,e_param]
@@ -360,15 +366,15 @@
     '''
     f = open(name + '.txt','rb')
     my_reader = csv.reader(f,delimiter=' ')
-    my_shape_txt = my_reader.next()
+    my_shape_txt = next(my_reader)
     shape0 = int(my_shape_txt[0])
     shape1 = int(my_shape_txt[1])
-    my_nums_txt = my_reader.next()
+    my_nums_txt = next(my_reader)
     iters = int(my_nums_txt[0])
     evals = int(my_nums_txt[1])
-    simplex_flat = np.array(my_reader.next(),dtype=float)
+    simplex_flat = np.array(next(my_reader),dtype=float)
     simplex = np.reshape(simplex_flat,(shape0,shape1))
-    fvals = np.array(my_reader.next(),dtype=float)
+    fvals = np.array(next(my_reader),dtype=float)
     f.close()

     return simplex, fvals, iters, evals
@@ -475,7 +481,7 @@
     P = 24
     my_guess = np.random.rand(K) - 0.5
     def testFunc1(x):
-        return np.sum(x**2.0)/x.size
+        return old_div(np.sum(x**2.0),x.size)

     xopt, fmin = parallelNelderMead(testFunc1,my_guess,P=P,maxiter=300,savefreq=100,name='testfile',resume=False)
     xopt2, fmin2 = parallelNelderMead(testFunc1,xopt,P=P)
```
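The `testFunc1` hunk just above is a reminder that the integer-flavored names checked earlier (`.size` and friends) are not automatically dangerous: `np.sum(x**2.0)` is a float, so this division was already true division under Py2, and switching it to `//` would silently change the answer. A quick illustration:

```python
import numpy as np

x = np.random.rand(24)
num = np.sum(x**2.0)     # numpy float64, not an Integral
print(num / x.size)      # true division in Py2 and Py3 alike (float numerator)
print(num // x.size)     # would floor the mean to 0.0 here -- use // only where
                         # integer semantics are actually intended
```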
--- ./HARK/interpolation.py (original) +++ ./HARK/interpolation.py (refactored) @@ -6,9 +6,14 @@ convergence. The interpolator classes currently in this module inherit their distance method from HARKobject. '''
+from future import division +from future import print_function +from future import absolute_import + +from builtins import range +from past.utils import old_div import numpy as np -from core import HARKobject +from .core import HARKobject from copy import deepcopy
def _isscalar(x): @@ -734,12 +739,12 @@
# Make a decay extrapolation
if intercept_limit is not None and slope_limit is not None:
-
slope_at_top = (y_list[-1] - y_list[-2])/(x_list[-1] - x_list[-2])
-
slope_at_top = old_div((y_list[-1] - y_list[-2]),(x_list[-1] - x_list[-2])) level_diff = intercept_limit + slope_limit*x_list[-1] - y_list[-1] slope_diff = slope_limit - slope_at_top self.decay_extrap_A = level_diff
-
self.decay_extrap_B = -slope_diff/level_diff
-
self.decay_extrap_B = old_div(-slope_diff,level_diff) self.intercept_limit = intercept_limit self.slope_limit = slope_limit self.decay_extrap = True
@@ -769,12 +774,12 @@
i = np.maximum(np.searchsorted(self.x_list[:-1],x),1)
-
alpha = (x-self.x_list[i-1])/(self.x_list[i]-self.x_list[i-1])
-
alpha = old_div((x-self.x_list[i-1]),(self.x_list[i]-self.x_list[i-1])) if _eval: y = (1.-alpha)*self.y_list[i-1] + alpha*self.y_list[i] if _Der:
-
dydx = (self.y_list[i] - self.y_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-
dydx = old_div((self.y_list[i] - self.y_list[i-1]),(self.x_list[i] - self.x_list[i-1])) if not self.lower_extrap: below_lower_bound = x < self.x_list[0]
@@ -880,7 +885,7 @@ self.coeffs = np.nan,np.nan,np.nan,np.nan
# Calculate interpolation coefficients on segments mapped to [0,1]
-
for i in xrange(self.n-1):
-
for i in range(self.n-1): x0 = x_list[i] y0 = y_list[i] x1 = x_list[i+1]
@@ -899,7 +904,7 @@ gap = slope_limit*x1 + intercept_limit - y1 slope = slope_limit - dydx_list[self.n-1] if (gap != 0) and (slope <= 0):
-
temp = [intercept_limit, slope_limit, gap, slope/gap]
-
temp = [intercept_limit, slope_limit, gap, old_div(slope,gap)] elif slope > 0: temp = [intercept_limit, slope_limit, 0, 0] # fixing a problem when slope is positive else:
@@ -917,7 +922,7 @@ if pos == 0: y = self.coeffs[0,0] + self.coeffs[0,1]*(x - self.x_list[0]) elif (pos < self.n):
-
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
-
alpha = old_div((x - self.x_list[pos-1]),(self.x_list[pos] - self.x_list[pos-1])) y = self.coeffs[pos,0] + alpha*(self.coeffs[pos,1] + alpha*(self.coeffs[pos,2] + alpha*self.coeffs[pos,3])) else: alpha = x - self.x_list[self.n-1]
@@ -934,7 +939,7 @@ # Do the "in bounds" evaluation points i = pos[in_bnds] coeffs_in = self.coeffs[i,:]
-
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-
alpha = old_div((x[in_bnds] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1])) y[in_bnds] = coeffs_in[:,0] + alpha*(coeffs_in[:,1] + alpha*(coeffs_in[:,2] + alpha*coeffs_in[:,3])) # Do the "out of bounds" evaluation points
@@ -953,8 +958,8 @@ if pos == 0: dydx = self.coeffs[0,1] elif (pos < self.n):
-
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
-
dydx = (self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3]))/(self.x_list[pos] - self.x_list[pos-1])
-
alpha = old_div((x - self.x_list[pos-1]),(self.x_list[pos] - self.x_list[pos-1]))
-
dydx = old_div((self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3])),(self.x_list[pos] - self.x_list[pos-1])) else: alpha = x - self.x_list[self.n-1] dydx = self.coeffs[pos,1] - self.coeffs[pos,2]*self.coeffs[pos,3]*np.exp(alpha*self.coeffs[pos,3])
@@ -970,8 +975,8 @@ # Do the "in bounds" evaluation points i = pos[in_bnds] coeffs_in = self.coeffs[i,:]
-
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-
dydx[in_bnds] = (coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3]))/(self.x_list[i] - self.x_list[i-1])
-
alpha = old_div((x[in_bnds] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1]))
-
dydx[in_bnds] = old_div((coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3])),(self.x_list[i] - self.x_list[i-1])) # Do the "out of bounds" evaluation points dydx[out_bot] = self.coeffs[0,1]
@@ -991,9 +996,9 @@ y = self.coeffs[0,0] + self.coeffs[0,1]*(x - self.x_list[0]) dydx = self.coeffs[0,1] elif (pos < self.n):
-
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
-
alpha = old_div((x - self.x_list[pos-1]),(self.x_list[pos] - self.x_list[pos-1])) y = self.coeffs[pos,0] + alpha*(self.coeffs[pos,1] + alpha*(self.coeffs[pos,2] + alpha*self.coeffs[pos,3]))
-
dydx = (self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3]))/(self.x_list[pos] - self.x_list[pos-1])
-
dydx = old_div((self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3])),(self.x_list[pos] - self.x_list[pos-1])) else: alpha = x - self.x_list[self.n-1] y = self.coeffs[pos,0] + x*self.coeffs[pos,1] - self.coeffs[pos,2]*np.exp(alpha*self.coeffs[pos,3])
@@ -1011,9 +1016,9 @@ # Do the "in bounds" evaluation points i = pos[in_bnds] coeffs_in = self.coeffs[i,:]
-
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-
alpha = old_div((x[in_bnds] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1])) y[in_bnds] = coeffs_in[:,0] + alpha*(coeffs_in[:,1] + alpha*(coeffs_in[:,2] + alpha*coeffs_in[:,3]))
-
dydx[in_bnds] = (coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3]))/(self.x_list[i] - self.x_list[i-1])
-
dydx[in_bnds] = old_div((coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3])),(self.x_list[i] - self.x_list[i-1])) # Do the "out of bounds" evaluation points y[out_bot] = self.coeffs[0,0] + self.coeffs[0,1]*(x[out_bot] - self.x_list[0])
@@ -1081,8 +1086,8 @@ y_pos = self.ySearchFunc(self.y_list,y) y_pos[y_pos < 1] = 1 y_pos[y_pos > self.y_n-1] = self.y_n-1
-
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
-
beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1])) f = ( (1-alpha)*(1-beta)*self.f_values[x_pos-1,y_pos-1] + (1-alpha)*beta*self.f_values[x_pos-1,y_pos]
@@ -1105,12 +1110,12 @@ y_pos = self.ySearchFunc(self.y_list,y) y_pos[y_pos < 1] = 1 y_pos[y_pos > self.y_n-1] = self.y_n-1
-
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
dfdx = (
-
beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
-
dfdx = old_div(( ((1-beta)*self.f_values[x_pos,y_pos-1] + beta*self.f_values[x_pos,y_pos]) - ((1-beta)*self.f_values[x_pos-1,y_pos-1]
-
+ beta*self.f_values[x_pos-1,y_pos]))/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
+ beta*self.f_values[x_pos-1,y_pos])),(self.x_list[x_pos] - self.x_list[x_pos-1])) return dfdx
def _derY(self,x,y): @@ -1128,12 +1133,12 @@ y_pos = self.ySearchFunc(self.y_list,y) y_pos[y_pos < 1] = 1 y_pos[y_pos > self.y_n-1] = self.y_n-1
-
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
dfdy = (
-
alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
-
dfdy = old_div(( ((1-alpha)*self.f_values[x_pos-1,y_pos] + alpha*self.f_values[x_pos,y_pos]) - ((1-alpha)*self.f_values[x_pos-1,y_pos]
-
+ alpha*self.f_values[x_pos-1,y_pos-1]))/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
+ alpha*self.f_values[x_pos-1,y_pos-1])),(self.y_list[y_pos] - self.y_list[y_pos-1])) return dfdy
@@ -1208,9 +1213,9 @@ z_pos = self.zSearchFunc(self.z_list,z) z_pos[z_pos < 1] = 1 z_pos[z_pos > self.z_n-1] = self.z_n-1
-
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-
alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
-
beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
-
gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1])) f = ( (1-alpha)*(1-beta)*(1-gamma)*self.f_values[x_pos-1,y_pos-1,z_pos-1] + (1-alpha)*(1-beta)*gamma*self.f_values[x_pos-1,y_pos-1,z_pos]
@@ -1241,9 +1246,9 @@ z_pos = self.zSearchFunc(self.z_list,z) z_pos[z_pos < 1] = 1 z_pos[z_pos > self.z_n-1] = self.z_n-1
-
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-
dfdx = (
-
beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
-
gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
-
dfdx = old_div(( ( (1-beta)*(1-gamma)*self.f_values[x_pos,y_pos-1,z_pos-1] + (1-beta)*gamma*self.f_values[x_pos,y_pos-1,z_pos] + beta*(1-gamma)*self.f_values[x_pos,y_pos,z_pos-1]
@@ -1251,7 +1256,7 @@ ( (1-beta)*(1-gamma)self.f_values[x_pos-1,y_pos-1,z_pos-1] + (1-beta)gammaself.f_values[x_pos-1,y_pos-1,z_pos] + beta(1-gamma)*self.f_values[x_pos-1,y_pos,z_pos-1]
-
+ beta*gamma*self.f_values[x_pos-1,y_pos,z_pos]))/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
+ beta*gamma*self.f_values[x_pos-1,y_pos,z_pos])),(self.x_list[x_pos] - self.x_list[x_pos-1])) return dfdx
def _derY(self,x,y,z): @@ -1273,9 +1278,9 @@ z_pos = self.zSearchFunc(self.z_list,z) z_pos[z_pos < 1] = 1 z_pos[z_pos > self.z_n-1] = self.z_n-1
-
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-
dfdy = (
-
alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
-
gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
-
dfdy = old_div(( ( (1-alpha)*(1-gamma)*self.f_values[x_pos-1,y_pos,z_pos-1] + (1-alpha)*gamma*self.f_values[x_pos-1,y_pos,z_pos] + alpha*(1-gamma)*self.f_values[x_pos,y_pos,z_pos-1]
@@ -1283,7 +1288,7 @@ ( (1-alpha)*(1-gamma)self.f_values[x_pos-1,y_pos-1,z_pos-1] + (1-alpha)gammaself.f_values[x_pos-1,y_pos-1,z_pos] + alpha(1-gamma)*self.f_values[x_pos,y_pos-1,z_pos-1]
-
+ alpha*gamma*self.f_values[x_pos,y_pos-1,z_pos]))/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
+ alpha*gamma*self.f_values[x_pos,y_pos-1,z_pos])),(self.y_list[y_pos] - self.y_list[y_pos-1])) return dfdy
def _derZ(self,x,y,z): @@ -1305,9 +1310,9 @@ z_pos = self.zSearchFunc(self.z_list,z) z_pos[z_pos < 1] = 1 z_pos[z_pos > self.z_n-1] = self.z_n-1
-
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-
dfdz = (
-
alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
-
beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
-
dfdz = old_div(( ( (1-alpha)*(1-beta)*self.f_values[x_pos-1,y_pos-1,z_pos] + (1-alpha)*beta*self.f_values[x_pos-1,y_pos,z_pos] + alpha*(1-beta)*self.f_values[x_pos,y_pos-1,z_pos]
@@ -1315,7 +1320,7 @@ ( (1-alpha)*(1-beta)self.f_values[x_pos-1,y_pos-1,z_pos-1] + (1-alpha)betaself.f_values[x_pos-1,y_pos,z_pos-1] + alpha(1-beta)*self.f_values[x_pos,y_pos-1,z_pos-1]
-
+ alpha*beta*self.f_values[x_pos,y_pos,z_pos-1]))/(self.z_list[z_pos] - self.z_list[z_pos-1])
-
+ alpha*beta*self.f_values[x_pos,y_pos,z_pos-1])),(self.z_list[z_pos] - self.z_list[z_pos-1])) return dfdz
```diff
@@ -1408,10 +1413,10 @@
             j = x_pos
             k = y_pos
             l = z_pos
-            alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
-            beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
-            gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
-            delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
+            alpha = old_div((w - self.w_list[i-1]),(self.w_list[i] - self.w_list[i-1]))
+            beta = old_div((x - self.x_list[j-1]),(self.x_list[j] - self.x_list[j-1]))
+            gamma = old_div((y - self.y_list[k-1]),(self.y_list[k] - self.y_list[k-1]))
+            delta = old_div((z - self.z_list[l-1]),(self.z_list[l] - self.z_list[l-1]))
             f = ((1-alpha)*((1-beta)*((1-gamma)*(1-delta)*self.f_values[i-1,j-1,k-1,l-1]
               + (1-gamma)*delta*self.f_values[i-1,j-1,k-1,l]
@@ -1458,10 +1463,10 @@
             j = x_pos
             k = y_pos
             l = z_pos
-            beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
-            gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
-            delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
-            dfdw = (
+            beta = old_div((x - self.x_list[j-1]),(self.x_list[j] - self.x_list[j-1]))
+            gamma = old_div((y - self.y_list[k-1]),(self.y_list[k] - self.y_list[k-1]))
+            delta = old_div((z - self.z_list[l-1]),(self.z_list[l] - self.z_list[l-1]))
+            dfdw = old_div((
                  ((1-beta)*(1-gamma)*(1-delta)*self.f_values[i,j-1,k-1,l-1]
                 + (1-beta)*(1-gamma)*delta*self.f_values[i,j-1,k-1,l]
                 + (1-beta)*gamma*(1-delta)*self.f_values[i,j-1,k,l-1]
@@ -1478,7 +1483,7 @@
                 + beta*(1-gamma)*delta*self.f_values[i-1,j,k-1,l]
                 + beta*gamma*(1-delta)*self.f_values[i-1,j,k,l-1]
                 + beta*gamma*delta*self.f_values[i-1,j,k,l] )
-                )/(self.w_list[i] - self.w_list[i-1])
+                ),(self.w_list[i] - self.w_list[i-1]))
         return dfdw

     def _derX(self,w,x,y,z):
@@ -1508,10 +1513,10 @@
             j = x_pos
             k = y_pos
             l = z_pos
-            alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
-            gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
-            delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
-            dfdx = (
+            alpha = old_div((w - self.w_list[i-1]),(self.w_list[i] - self.w_list[i-1]))
+            gamma = old_div((y - self.y_list[k-1]),(self.y_list[k] - self.y_list[k-1]))
+            delta = old_div((z - self.z_list[l-1]),(self.z_list[l] - self.z_list[l-1]))
+            dfdx = old_div((
                  ((1-alpha)*(1-gamma)*(1-delta)*self.f_values[i-1,j,k-1,l-1]
                 + (1-alpha)*(1-gamma)*delta*self.f_values[i-1,j,k-1,l]
                 + (1-alpha)*gamma*(1-delta)*self.f_values[i-1,j,k,l-1]
@@ -1528,7 +1533,7 @@
                 + alpha*(1-gamma)*delta*self.f_values[i,j-1,k-1,l]
                 + alpha*gamma*(1-delta)*self.f_values[i,j-1,k,l-1]
                 + alpha*gamma*delta*self.f_values[i,j-1,k,l] )
-                )/(self.x_list[j] - self.x_list[j-1])
+                ),(self.x_list[j] - self.x_list[j-1]))
         return dfdx

     def _derY(self,w,x,y,z):
@@ -1558,10 +1563,10 @@
             j = x_pos
             k = y_pos
             l = z_pos
-            alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
-            beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
-            delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
-            dfdy = (
+            alpha = old_div((w - self.w_list[i-1]),(self.w_list[i] - self.w_list[i-1]))
+            beta = old_div((x - self.x_list[j-1]),(self.x_list[j] - self.x_list[j-1]))
+            delta = old_div((z - self.z_list[l-1]),(self.z_list[l] - self.z_list[l-1]))
+            dfdy = old_div((
                  ((1-alpha)*(1-beta)*(1-delta)*self.f_values[i-1,j-1,k,l-1]
                 + (1-alpha)*(1-beta)*delta*self.f_values[i-1,j-1,k,l]
                 + (1-alpha)*beta*(1-delta)*self.f_values[i-1,j,k,l-1]
@@ -1578,7 +1583,7 @@
                 + alpha*(1-beta)*delta*self.f_values[i,j-1,k-1,l]
                 + alpha*beta*(1-delta)*self.f_values[i,j,k-1,l-1]
                 + alpha*beta*delta*self.f_values[i,j,k-1,l] )
-                )/(self.y_list[k] - self.y_list[k-1])
+                ),(self.y_list[k] - self.y_list[k-1]))
         return dfdy

     def _derZ(self,w,x,y,z):
@@ -1608,10 +1613,10 @@
             j = x_pos
             k = y_pos
             l = z_pos
-            alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
-            beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
-            gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
-            dfdz = (
+            alpha = old_div((w - self.w_list[i-1]),(self.w_list[i] - self.w_list[i-1]))
+            beta = old_div((x - self.x_list[j-1]),(self.x_list[j] - self.x_list[j-1]))
+            gamma = old_div((y - self.y_list[k-1]),(self.y_list[k] - self.y_list[k-1]))
+            dfdz = old_div((
                  ((1-alpha)*(1-beta)*(1-gamma)*self.f_values[i-1,j-1,k-1,l]
                 + (1-alpha)*(1-beta)*gamma*self.f_values[i-1,j-1,k,l]
                 + (1-alpha)*beta*(1-gamma)*self.f_values[i-1,j,k-1,l]
@@ -1628,7 +1633,7 @@
                 + alpha*(1-beta)*gamma*self.f_values[i,j-1,k,l-1]
                 + alpha*beta*(1-gamma)*self.f_values[i,j,k-1,l-1]
                 + alpha*beta*gamma*self.f_values[i,j,k,l-1] )
-                )/(self.z_list[l] - self.z_list[l-1])
+                ),(self.z_list[l] - self.z_list[l-1]))
         return dfdz
```
```diff
@@ -2193,7 +2198,7 @@
         '''
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
             f = (1-alpha)*self.xInterpolators[y_pos-1](x) + alpha*self.xInterpolators[y_pos](x)
         else:
             m = len(x)
@@ -2202,10 +2207,10 @@
             y_pos[y_pos < 1] = 1
             f = np.zeros(m) + np.nan
             if y.size > 0:
-                for i in xrange(1,self.y_n):
+                for i in range(1,self.y_n):
                     c = y_pos == i
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
                         f[c] = (1-alpha)*self.xInterpolators[i-1](x[c]) + alpha*self.xInterpolators[i](x[c])
         return f
@@ -2216,7 +2221,7 @@
         '''
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
             dfdx = (1-alpha)*self.xInterpolators[y_pos-1]._der(x) + alpha*self.xInterpolators[y_pos]._der(x)
         else:
             m = len(x)
@@ -2225,10 +2230,10 @@
             y_pos[y_pos < 1] = 1
             dfdx = np.zeros(m) + np.nan
             if y.size > 0:
-                for i in xrange(1,self.y_n):
+                for i in range(1,self.y_n):
                     c = y_pos == i
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
                         dfdx[c] = (1-alpha)*self.xInterpolators[i-1]._der(x[c]) + alpha*self.xInterpolators[i]._der(x[c])
         return dfdx
@@ -2239,7 +2244,7 @@
         '''
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
-            dfdy = (self.xInterpolators[y_pos](x) - self.xInterpolators[y_pos-1](x))/(self.y_list[y_pos] - self.y_list[y_pos-1])
+            dfdy = old_div((self.xInterpolators[y_pos](x) - self.xInterpolators[y_pos-1](x)),(self.y_list[y_pos] - self.y_list[y_pos-1]))
         else:
             m = len(x)
             y_pos = np.searchsorted(self.y_list,y)
@@ -2247,10 +2252,10 @@
             y_pos[y_pos < 1] = 1
             dfdy = np.zeros(m) + np.nan
             if y.size > 0:
-                for i in xrange(1,self.y_n):
+                for i in range(1,self.y_n):
                     c = y_pos == i
                     if np.any(c):
-                        dfdy[c] = (self.xInterpolators[i](x[c]) - self.xInterpolators[i-1](x[c]))/(self.y_list[i] - self.y_list[i-1])
+                        dfdy[c] = old_div((self.xInterpolators[i](x[c]) - self.xInterpolators[i-1](x[c])),(self.y_list[i] - self.y_list[i-1]))
         return dfdy
```
```diff
@@ -2295,8 +2300,8 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             f = ((1-alpha)*(1-beta)*self.xInterpolators[y_pos-1][z_pos-1](x)
               + (1-alpha)*beta*self.xInterpolators[y_pos-1][z_pos](x)
               + alpha*(1-beta)*self.xInterpolators[y_pos][z_pos-1](x)
@@ -2310,12 +2315,12 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             f = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
                         f[c] = ((1-alpha)*(1-beta)*self.xInterpolators[i-1][j-1](x[c])
                              + (1-alpha)*beta*self.xInterpolators[i-1][j](x[c])
@@ -2331,8 +2336,8 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             dfdx = ((1-alpha)*(1-beta)*self.xInterpolators[y_pos-1][z_pos-1]._der(x)
                  + (1-alpha)*beta*self.xInterpolators[y_pos-1][z_pos]._der(x)
                  + alpha*(1-beta)*self.xInterpolators[y_pos][z_pos-1]._der(x)
@@ -2346,12 +2351,12 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdx = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
                         dfdx[c] = ((1-alpha)*(1-beta)*self.xInterpolators[i-1][j-1]._der(x[c])
                                 + (1-alpha)*beta*self.xInterpolators[i-1][j]._der(x[c])
@@ -2367,9 +2372,9 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-            dfdy = (((1-beta)*self.xInterpolators[y_pos][z_pos-1](x) + beta*self.xInterpolators[y_pos][z_pos](x))
-                  - ((1-beta)*self.xInterpolators[y_pos-1][z_pos-1](x) + beta*self.xInterpolators[y_pos-1][z_pos](x)))/(self.y_list[y_pos] - self.y_list[y_pos-1])
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
+            dfdy = old_div((((1-beta)*self.xInterpolators[y_pos][z_pos-1](x) + beta*self.xInterpolators[y_pos][z_pos](x))
+                  - ((1-beta)*self.xInterpolators[y_pos-1][z_pos-1](x) + beta*self.xInterpolators[y_pos-1][z_pos](x))),(self.y_list[y_pos] - self.y_list[y_pos-1]))
         else:
             m = len(x)
             y_pos = np.searchsorted(self.y_list,y)
@@ -2379,13 +2384,13 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdy = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
-                        dfdy[c] = (((1-beta)*self.xInterpolators[i][j-1](x[c]) + beta*self.xInterpolators[i][j](x[c]))
-                                 - ((1-beta)*self.xInterpolators[i-1][j-1](x[c]) + beta*self.xInterpolators[i-1][j](x[c])))/(self.y_list[i] - self.y_list[i-1])
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
+                        dfdy[c] = old_div((((1-beta)*self.xInterpolators[i][j-1](x[c]) + beta*self.xInterpolators[i][j](x[c]))
+                                 - ((1-beta)*self.xInterpolators[i-1][j-1](x[c]) + beta*self.xInterpolators[i-1][j](x[c]))),(self.y_list[i] - self.y_list[i-1]))
         return dfdy

     def _derZ(self,x,y,z):
@@ -2396,9 +2401,9 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            dfdz = (((1-alpha)*self.xInterpolators[y_pos-1][z_pos](x) + alpha*self.xInterpolators[y_pos][z_pos](x))
-                  - ((1-alpha)*self.xInterpolators[y_pos-1][z_pos-1](x) + alpha*self.xInterpolators[y_pos][z_pos-1](x)))/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            dfdz = old_div((((1-alpha)*self.xInterpolators[y_pos-1][z_pos](x) + alpha*self.xInterpolators[y_pos][z_pos](x))
+                  - ((1-alpha)*self.xInterpolators[y_pos-1][z_pos-1](x) + alpha*self.xInterpolators[y_pos][z_pos-1](x))),(self.z_list[z_pos] - self.z_list[z_pos-1]))
         else:
             m = len(x)
             y_pos = np.searchsorted(self.y_list,y)
@@ -2408,13 +2413,13 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdz = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        dfdz[c] = (((1-alpha)*self.xInterpolators[i-1][j](x[c]) + alpha*self.xInterpolators[i][j](x[c]))
-                                 - ((1-alpha)*self.xInterpolators[i-1][j-1](x[c]) + alpha*self.xInterpolators[i][j-1](x[c])))/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        dfdz[c] = old_div((((1-alpha)*self.xInterpolators[i-1][j](x[c]) + alpha*self.xInterpolators[i][j](x[c]))
+                                 - ((1-alpha)*self.xInterpolators[i-1][j-1](x[c]) + alpha*self.xInterpolators[i][j-1](x[c]))),(self.z_list[j] - self.z_list[j-1]))
         return dfdz
```
```diff
@@ -2464,9 +2469,9 @@
             x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-            beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
+            beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             f = ((1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
               + (1-alpha)*(1-beta)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
@@ -2487,14 +2492,14 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             f = np.zeros(m) + np.nan
-            for i in xrange(1,self.x_n):
-                for j in xrange(1,self.y_n):
-                    for k in xrange(1,self.z_n):
+            for i in range(1,self.x_n):
+                for j in range(1,self.y_n):
+                    for k in range(1,self.z_n):
                         c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
                         if np.any(c):
-                            alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-                            beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
-                            gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
+                            alpha = old_div((x[c] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1]))
+                            beta = old_div((y[c] - self.y_list[j-1]),(self.y_list[j] - self.y_list[j-1]))
+                            gamma = old_div((z[c] - self.z_list[k-1]),(self.z_list[k] - self.z_list[k-1]))
                             f[c] = ((1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1](w[c])
                                  + (1-alpha)*(1-beta)*gamma*self.wInterpolators[i-1][j-1][k](w[c])
@@ -2515,9 +2520,9 @@
             x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
-            beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((x - self.x_list[x_pos-1]),(self.x_list[x_pos] - self.x_list[x_pos-1]))
+            beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             dfdw = ((1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1]._der(w)
                  + (1-alpha)*(1-beta)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos]._der(w)
@@ -2538,14 +2543,14 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdw = np.zeros(m) + np.nan
-            for i in xrange(1,self.x_n):
-                for j in xrange(1,self.y_n):
-                    for k in xrange(1,self.z_n):
+            for i in range(1,self.x_n):
+                for j in range(1,self.y_n):
+                    for k in range(1,self.z_n):
                         c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
                         if np.any(c):
-                            alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-                            beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
-                            gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
+                            alpha = old_div((x[c] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1]))
+                            beta = old_div((y[c] - self.y_list[j-1]),(self.y_list[j] - self.y_list[j-1]))
+                            gamma = old_div((z[c] - self.z_list[k-1]),(self.z_list[k] - self.z_list[k-1]))
                             dfdw[c] = ((1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1]._der(w[c])
                                     + (1-alpha)*(1-beta)*gamma*self.wInterpolators[i-1][j-1][k]._der(w[c])
@@ -2566,9 +2571,9 @@
             x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-            dfdx = (
+            beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
+            dfdx = old_div((
                  ((1-beta)*(1-gamma)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
                 + (1-beta)*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos](w)
                 + beta*(1-gamma)*self.wInterpolators[x_pos][y_pos][z_pos-1](w)
@@ -2576,7 +2581,7 @@
                  ((1-beta)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
                 + (1-beta)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
                 + beta*(1-gamma)*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
-                + beta*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos](w)))/(self.x_list[x_pos] - self.x_list[x_pos-1])
+                + beta*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos](w))),(self.x_list[x_pos] - self.x_list[x_pos-1]))
         else:
             m = len(x)
             x_pos = np.searchsorted(self.x_list,x)
@@ -2588,14 +2593,14 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdx = np.zeros(m) + np.nan
-            for i in xrange(1,self.x_n):
-                for j in xrange(1,self.y_n):
-                    for k in xrange(1,self.z_n):
+            for i in range(1,self.x_n):
+                for j in range(1,self.y_n):
+                    for k in range(1,self.z_n):
                         c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
                         if np.any(c):
-                            beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
-                            gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
-                            dfdx[c] = (
+                            beta = old_div((y[c] - self.y_list[j-1]),(self.y_list[j] - self.y_list[j-1]))
+                            gamma = old_div((z[c] - self.z_list[k-1]),(self.z_list[k] - self.z_list[k-1]))
+                            dfdx[c] = old_div((
                                  ((1-beta)*(1-gamma)*self.wInterpolators[i][j-1][k-1](w[c])
                                 + (1-beta)*gamma*self.wInterpolators[i][j-1][k](w[c])
                                 + beta*(1-gamma)*self.wInterpolators[i][j][k-1](w[c])
@@ -2603,7 +2608,7 @@
                                  ((1-beta)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1](w[c])
                                 + (1-beta)*gamma*self.wInterpolators[i-1][j-1][k](w[c])
                                 + beta*(1-gamma)*self.wInterpolators[i-1][j][k-1](w[c])
-                                + beta*gamma*self.wInterpolators[i-1][j][k](w[c])))/(self.x_list[i] - self.x_list[i-1])
+                                + beta*gamma*self.wInterpolators[i-1][j][k](w[c]))),(self.x_list[i] - self.x_list[i-1]))
         return dfdx

     def _derY(self,w,x,y,z):
@@ -2615,9 +2620,9 @@
             x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (x - self.x_list[x_pos-1])/(self.y_list[x_pos] - self.x_list[x_pos-1])
-            gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-            dfdy = (
+            alpha = old_div((x - self.x_list[x_pos-1]),(self.y_list[x_pos] - self.x_list[x_pos-1]))
+            gamma = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
+            dfdy = old_div((
                  ((1-alpha)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
                 + (1-alpha)*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos](w)
                 + alpha*(1-gamma)*self.wInterpolators[x_pos][y_pos][z_pos-1](w)
@@ -2625,7 +2630,7 @@
                  ((1-alpha)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
                 + (1-alpha)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
                 + alpha*(1-gamma)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
-                + alpha*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos](w)))/(self.y_list[y_pos] - self.y_list[y_pos-1])
+                + alpha*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos](w))),(self.y_list[y_pos] - self.y_list[y_pos-1]))
         else:
             m = len(x)
             x_pos = np.searchsorted(self.x_list,x)
@@ -2637,14 +2642,14 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdy = np.zeros(m) + np.nan
-            for i in xrange(1,self.x_n):
-                for j in xrange(1,self.y_n):
-                    for k in xrange(1,self.z_n):
+            for i in range(1,self.x_n):
+                for j in range(1,self.y_n):
+                    for k in range(1,self.z_n):
                         c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
                         if np.any(c):
-                            alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-                            gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
-                            dfdy[c] = (
+                            alpha = old_div((x[c] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1]))
+                            gamma = old_div((z[c] - self.z_list[k-1]),(self.z_list[k] - self.z_list[k-1]))
+                            dfdy[c] = old_div((
                                  ((1-alpha)*(1-gamma)*self.wInterpolators[i-1][j][k-1](w[c])
                                 + (1-alpha)*gamma*self.wInterpolators[i-1][j][k](w[c])
                                 + alpha*(1-gamma)*self.wInterpolators[i][j][k-1](w[c])
@@ -2652,7 +2657,7 @@
                                  ((1-alpha)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1](w[c])
                                 + (1-alpha)*gamma*self.wInterpolators[i-1][j-1][k](w[c])
                                 + alpha*(1-gamma)*self.wInterpolators[i][j-1][k-1](w[c])
-                                + alpha*gamma*self.wInterpolators[i][j-1][k](w[c])))/(self.y_list[j] - self.y_list[j-1])
+                                + alpha*gamma*self.wInterpolators[i][j-1][k](w[c]))),(self.y_list[j] - self.y_list[j-1]))
         return dfdy

     def _derZ(self,w,x,y,z):
@@ -2664,9 +2669,9 @@
             x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (x - self.x_list[x_pos-1])/(self.y_list[x_pos] - self.x_list[x_pos-1])
-            beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            dfdz = (
+            alpha = old_div((x - self.x_list[x_pos-1]),(self.y_list[x_pos] - self.x_list[x_pos-1]))
+            beta = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            dfdz = old_div((
                  ((1-alpha)*(1-beta)*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
                 + (1-alpha)*beta*self.wInterpolators[x_pos-1][y_pos][z_pos](w)
                 + alpha*(1-beta)*self.wInterpolators[x_pos][y_pos-1][z_pos](w)
@@ -2674,7 +2679,7 @@
                  ((1-alpha)*(1-beta)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
                 + (1-alpha)*beta*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
                 + alpha*(1-beta)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
-                + alpha*beta*self.wInterpolators[x_pos][y_pos][z_pos-1](w)))/(self.z_list[z_pos] - self.z_list[z_pos-1])
+                + alpha*beta*self.wInterpolators[x_pos][y_pos][z_pos-1](w))),(self.z_list[z_pos] - self.z_list[z_pos-1]))
         else:
             m = len(x)
             x_pos = np.searchsorted(self.x_list,x)
@@ -2686,14 +2691,14 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdz = np.zeros(m) + np.nan
-            for i in xrange(1,self.x_n):
-                for j in xrange(1,self.y_n):
-                    for k in xrange(1,self.z_n):
+            for i in range(1,self.x_n):
+                for j in range(1,self.y_n):
+                    for k in range(1,self.z_n):
                         c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
                         if np.any(c):
-                            alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
-                            beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
-                            dfdz[c] = (
+                            alpha = old_div((x[c] - self.x_list[i-1]),(self.x_list[i] - self.x_list[i-1]))
+                            beta = old_div((y[c] - self.y_list[j-1]),(self.y_list[j] - self.y_list[j-1]))
+                            dfdz[c] = old_div((
                                  ((1-alpha)*(1-beta)*self.wInterpolators[i-1][j-1][k](w[c])
                                 + (1-alpha)*beta*self.wInterpolators[i-1][j][k](w[c])
                                 + alpha*(1-beta)*self.wInterpolators[i][j-1][k](w[c])
@@ -2701,7 +2706,7 @@
                                  ((1-alpha)*(1-beta)*self.wInterpolators[i-1][j-1][k-1](w[c])
                                 + (1-alpha)*beta*self.wInterpolators[i-1][j][k-1](w[c])
                                 + alpha*(1-beta)*self.wInterpolators[i][j-1][k-1](w[c])
-                                + alpha*beta*self.wInterpolators[i][j][k-1](w[c])))/(self.z_list[k] - self.z_list[k-1])
+                                + alpha*beta*self.wInterpolators[i][j][k-1](w[c]))),(self.z_list[k] - self.z_list[k-1]))
         return dfdz
```
```diff
@@ -2744,7 +2749,7 @@
         '''
         if _isscalar(x):
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             f = (1-alpha)*self.xyInterpolators[z_pos-1](x,y) + alpha*self.xyInterpolators[z_pos](x,y)
         else:
             m = len(x)
@@ -2753,10 +2758,10 @@
             z_pos[z_pos < 1] = 1
             f = np.zeros(m) + np.nan
             if x.size > 0:
-                for i in xrange(1,self.z_n):
+                for i in range(1,self.z_n):
                     c = z_pos == i
                     if np.any(c):
-                        alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
+                        alpha = old_div((z[c] - self.z_list[i-1]),(self.z_list[i] - self.z_list[i-1]))
                         f[c] = (1-alpha)*self.xyInterpolators[i-1](x[c],y[c]) + alpha*self.xyInterpolators[i](x[c],y[c])
         return f
@@ -2767,7 +2772,7 @@
         '''
         if _isscalar(x):
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             dfdx = (1-alpha)*self.xyInterpolators[z_pos-1].derivativeX(x,y) + alpha*self.xyInterpolators[z_pos].derivativeX(x,y)
         else:
             m = len(x)
@@ -2776,10 +2781,10 @@
             z_pos[z_pos < 1] = 1
             dfdx = np.zeros(m) + np.nan
             if x.size > 0:
-                for i in xrange(1,self.z_n):
+                for i in range(1,self.z_n):
                     c = z_pos == i
                     if np.any(c):
-                        alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
+                        alpha = old_div((z[c] - self.z_list[i-1]),(self.z_list[i] - self.z_list[i-1]))
                         dfdx[c] = (1-alpha)*self.xyInterpolators[i-1].derivativeX(x[c],y[c]) + alpha*self.xyInterpolators[i].derivativeX(x[c],y[c])
         return dfdx
@@ -2790,7 +2795,7 @@
         '''
         if _isscalar(x):
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             dfdy = (1-alpha)*self.xyInterpolators[z_pos-1].derivativeY(x,y) + alpha*self.xyInterpolators[z_pos].derivativeY(x,y)
         else:
             m = len(x)
@@ -2799,10 +2804,10 @@
             z_pos[z_pos < 1] = 1
             dfdy = np.zeros(m) + np.nan
             if x.size > 0:
-                for i in xrange(1,self.z_n):
+                for i in range(1,self.z_n):
                     c = z_pos == i
                     if np.any(c):
-                        alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
+                        alpha = old_div((z[c] - self.z_list[i-1]),(self.z_list[i] - self.z_list[i-1]))
                         dfdy[c] = (1-alpha)*self.xyInterpolators[i-1].derivativeY(x[c],y[c]) + alpha*self.xyInterpolators[i].derivativeY(x[c],y[c])
         return dfdy
@@ -2813,7 +2818,7 @@
         '''
         if _isscalar(x):
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            dfdz = (self.xyInterpolators[z_pos].derivativeX(x,y) - self.xyInterpolators[z_pos-1].derivativeX(x,y))/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            dfdz = old_div((self.xyInterpolators[z_pos].derivativeX(x,y) - self.xyInterpolators[z_pos-1].derivativeX(x,y)),(self.z_list[z_pos] - self.z_list[z_pos-1]))
         else:
             m = len(x)
             z_pos = np.searchsorted(self.z_list,z)
@@ -2821,10 +2826,10 @@
             z_pos[z_pos < 1] = 1
             dfdz = np.zeros(m) + np.nan
             if x.size > 0:
-                for i in xrange(1,self.z_n):
+                for i in range(1,self.z_n):
                     c = z_pos == i
                     if np.any(c):
-                        dfdz[c] = (self.xyInterpolators[i](x[c],y[c]) - self.xyInterpolators[i-1](x[c],y[c]))/(self.z_list[i] - self.z_list[i-1])
+                        dfdz[c] = old_div((self.xyInterpolators[i](x[c],y[c]) - self.xyInterpolators[i-1](x[c],y[c])),(self.z_list[i] - self.z_list[i-1]))
         return dfdz
```
```diff
 class BilinearInterpOnInterp2D(HARKinterpolator4D):
@@ -2872,8 +2877,8 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             f = ((1-alpha)*(1-beta)*self.wxInterpolators[y_pos-1][z_pos-1](w,x)
               + (1-alpha)*beta*self.wxInterpolators[y_pos-1][z_pos](w,x)
               + alpha*(1-beta)*self.wxInterpolators[y_pos][z_pos-1](w,x)
@@ -2887,12 +2892,12 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             f = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
                         f[c] = ((1-alpha)*(1-beta)*self.wxInterpolators[i-1][j-1](w[c],x[c])
                              + (1-alpha)*beta*self.wxInterpolators[i-1][j](w[c],x[c])
@@ -2912,8 +2917,8 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             dfdw = ((1-alpha)*(1-beta)*self.wxInterpolators[y_pos-1][z_pos-1].derivativeX(w,x)
                  + (1-alpha)*beta*self.wxInterpolators[y_pos-1][z_pos].derivativeX(w,x)
                  + alpha*(1-beta)*self.wxInterpolators[y_pos][z_pos-1].derivativeX(w,x)
@@ -2927,12 +2932,12 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdw = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
                         dfdw[c] = ((1-alpha)*(1-beta)*self.wxInterpolators[i-1][j-1].derivativeX(w[c],x[c])
                                 + (1-alpha)*beta*self.wxInterpolators[i-1][j].derivativeX(w[c],x[c])
@@ -2952,8 +2957,8 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
             dfdx = ((1-alpha)*(1-beta)*self.wxInterpolators[y_pos-1][z_pos-1].derivativeY(w,x)
                  + (1-alpha)*beta*self.wxInterpolators[y_pos-1][z_pos].derivativeY(w,x)
                  + alpha*(1-beta)*self.wxInterpolators[y_pos][z_pos-1].derivativeY(w,x)
@@ -2967,12 +2972,12 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdx = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
                         dfdx[c] = ((1-alpha)*(1-beta)*self.wxInterpolators[i-1][j-1].derivativeY(w[c],x[c])
                                 + (1-alpha)*beta*self.wxInterpolators[i-1][j].derivativeY(w[c],x[c])
@@ -2988,9 +2993,9 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
-            dfdy = (((1-beta)*self.wxInterpolators[y_pos][z_pos-1](w,x) + beta*self.wxInterpolators[y_pos][z_pos](w,x))
-                  - ((1-beta)*self.wxInterpolators[y_pos-1][z_pos-1](w,x) + beta*self.wxInterpolators[y_pos-1][z_pos](w,x)))/(self.y_list[y_pos] - self.y_list[y_pos-1])
+            beta = old_div((z - self.z_list[z_pos-1]),(self.z_list[z_pos] - self.z_list[z_pos-1]))
+            dfdy = old_div((((1-beta)*self.wxInterpolators[y_pos][z_pos-1](w,x) + beta*self.wxInterpolators[y_pos][z_pos](w,x))
+                  - ((1-beta)*self.wxInterpolators[y_pos-1][z_pos-1](w,x) + beta*self.wxInterpolators[y_pos-1][z_pos](w,x))),(self.y_list[y_pos] - self.y_list[y_pos-1]))
         else:
             m = len(x)
             y_pos = np.searchsorted(self.y_list,y)
@@ -3000,13 +3005,13 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdy = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
-                        dfdy[c] = (((1-beta)*self.wxInterpolators[i][j-1](w[c],x[c]) + beta*self.wxInterpolators[i][j](w[c],x[c]))
-                                 - ((1-beta)*self.wxInterpolators[i-1][j-1](w[c],x[c]) + beta*self.wxInterpolators[i-1][j](w[c],x[c])))/(self.y_list[i] - self.y_list[i-1])
+                        beta = old_div((z[c] - self.z_list[j-1]),(self.z_list[j] - self.z_list[j-1]))
+                        dfdy[c] = old_div((((1-beta)*self.wxInterpolators[i][j-1](w[c],x[c]) + beta*self.wxInterpolators[i][j](w[c],x[c]))
+                                 - ((1-beta)*self.wxInterpolators[i-1][j-1](w[c],x[c]) + beta*self.wxInterpolators[i-1][j](w[c],x[c]))),(self.y_list[i] - self.y_list[i-1]))
         return dfdy

     def _derZ(self,w,x,y,z):
@@ -3017,9 +3022,9 @@
         if _isscalar(x):
             y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
             z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
-            alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
-            dfdz = (((1-alpha)*self.wxInterpolators[y_pos-1][z_pos](w,x) + alpha*self.wxInterpolators[y_pos][z_pos](w,x))
-                  - ((1-alpha)*self.wxInterpolators[y_pos-1][z_pos-1](w,x) + alpha*self.wxInterpolators[y_pos][z_pos-1](w,x)))/(self.z_list[z_pos] - self.z_list[z_pos-1])
+            alpha = old_div((y - self.y_list[y_pos-1]),(self.y_list[y_pos] - self.y_list[y_pos-1]))
+            dfdz = old_div((((1-alpha)*self.wxInterpolators[y_pos-1][z_pos](w,x) + alpha*self.wxInterpolators[y_pos][z_pos](w,x))
+                  - ((1-alpha)*self.wxInterpolators[y_pos-1][z_pos-1](w,x) + alpha*self.wxInterpolators[y_pos][z_pos-1](w,x))),(self.z_list[z_pos] - self.z_list[z_pos-1]))
         else:
             m = len(x)
             y_pos = np.searchsorted(self.y_list,y)
@@ -3029,13 +3034,13 @@
             z_pos[z_pos > self.z_n-1] = self.z_n-1
             z_pos[z_pos < 1] = 1
             dfdz = np.zeros(m) + np.nan
-            for i in xrange(1,self.y_n):
-                for j in xrange(1,self.z_n):
+            for i in range(1,self.y_n):
+                for j in range(1,self.z_n):
                     c = np.logical_and(i == y_pos, j == z_pos)
                     if np.any(c):
-                        alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
-                        dfdz[c] = (((1-alpha)*self.wxInterpolators[i-1][j](w[c],x[c]) + alpha*self.wxInterpolators[i][j](w[c],x[c]))
-                                 - ((1-alpha)*self.wxInterpolators[i-1][j-1](w[c],x[c]) + alpha*self.wxInterpolators[i][j-1](w[c],x[c])))/(self.z_list[j] - self.z_list[j-1])
+                        alpha = old_div((y[c] - self.y_list[i-1]),(self.y_list[i] - self.y_list[i-1]))
+                        dfdz[c] = old_div((((1-alpha)*self.wxInterpolators[i-1][j](w[c],x[c]) + alpha*self.wxInterpolators[i][j](w[c],x[c]))
+                                 - ((1-alpha)*self.wxInterpolators[i-1][j-1](w[c],x[c]) + alpha*self.wxInterpolators[i][j-1](w[c],x[c]))),(self.z_list[j] - self.z_list[j-1]))
         return dfdz
```
```diff
@@ -3235,27 +3240,27 @@
         g = (yC-yA)
         h = (yA-yB-yC+yD)
         denom = (d*g-h*c)
-        mu = (h*b-d*f)/denom
-        tau = (h*(a-x) - d*(e-y))/denom
+        mu = old_div((h*b-d*f),denom)
+        tau = old_div((h*(a-x) - d*(e-y)),denom)
         zeta = a - x + c*tau
         eta = b + c*mu + d*tau
         theta = d*mu
-        alpha = (-eta + polarity*np.sqrt(eta**2.0 - 4.0*zeta*theta))/(2.0*theta)
+        alpha = old_div((-eta + polarity*np.sqrt(eta**2.0 - 4.0*zeta*theta)),(2.0*theta))
         beta = mu*alpha + tau

         # Alternate method if there are sectors that are "too regular"
         z = np.logical_or(np.isnan(alpha),np.isnan(beta)) # These points weren't able to identify coordinates
         if np.any(z):
-            these = np.isclose(f/b,(yD-yC)/(xD-xC)) # iso-beta lines have equal slope
+            these = np.isclose(old_div(f,b),old_div((yD-yC),(xD-xC))) # iso-beta lines have equal slope
            if np.any(these):
-                kappa = f[these]/b[these]
+                kappa = old_div(f[these],b[these])
                 int_bot = yA[these] - kappa*xA[these]
                 int_top = yC[these] - kappa*xC[these]
                 int_these = y[these] - kappa*x[these]
-                beta_temp = (int_these-int_bot)/(int_top-int_bot)
+                beta_temp = old_div((int_these-int_bot),(int_top-int_bot))
                 x_left = beta_temp*xC[these] + (1.0-beta_temp)*xA[these]
                 x_right = beta_temp*xD[these] + (1.0-beta_temp)*xB[these]
-                alpha_temp= (x[these]-x_left)/(x_right-x_left)
+                alpha_temp= old_div((x[these]-x_left),(x_right-x_left))
                 beta[these] = beta_temp
                 alpha[these] = alpha_temp
@@ -3309,8 +3314,8 @@
         # Invert the delta translation matrix into x,y --> alpha,beta
         det = alpha_x*beta_y - beta_x*alpha_y
-        x_alpha = beta_y/det
-        x_beta = -alpha_y/det
+        x_alpha = old_div(beta_y,det)
+        x_beta = old_div(-alpha_y,det)
         #y_alpha = -beta_x/det
         #y_beta = alpha_x/det
@@ -3354,8 +3359,8 @@
         det = alpha_x*beta_y - beta_x*alpha_y
         #x_alpha = beta_y/det
         #x_beta = -alpha_y/det
-        y_alpha = -beta_x/det
-        y_beta = alpha_x/det
+        y_alpha = old_div(-beta_x,det)
+        y_beta = old_div(alpha_x,det)

         # Calculate the derivative of f w.r.t. alpha and beta
         dfda = (1-beta)*(fB-fA) + beta*(fD-fC)
```
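The Curvilinear2DInterp expressions above divide float numpy arrays elementwise, where `/` and `old_div` agree. A sketch with made-up values (not HARK data) of the one numpy case that does need attention:

```python
from __future__ import division
import numpy as np

# Float arrays: elementwise / is unchanged by true division.
beta_y = np.array([2.0, -1.0])
det = np.array([4.0, 2.0])
x_alpha = beta_y / det              # array([ 0.5, -0.5]) in Py2 and Py3

# Integer arrays are the case to watch: / now yields floats, so intended
# integer math (e.g. index arithmetic) should be spelled with //:
idx = np.array([7, 9]) // 2         # array([3, 4]) in both versions
```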
```diff
@@ -3380,7 +3385,7 @@
 if False:
     x = np.linspace(1,20,39)
     y = np.log(x)
-    dydx = 1.0/x
+    dydx = old_div(1.0,x)
     f = CubicInterp(x,y,dydx)
     x_test = np.linspace(0,30,200)
     y_test = f(x_test)
@@ -3406,16 +3411,16 @@
     rand_x = RNG.rand(100)*5.0
     rand_y = RNG.rand(100)*5.0
-    z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
-    q = (dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y))/dfdx(rand_x,rand_y)
-    r = (dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y))/dfdy(rand_x,rand_y)
+    z = old_div((f(rand_x,rand_y) - g(rand_x,rand_y)),f(rand_x,rand_y))
+    q = old_div((dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y)),dfdx(rand_x,rand_y))
+    r = old_div((dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y)),dfdy(rand_x,rand_y))
     #print(z)
     #print(q)
     #print(r)
-    z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
-    q = (dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y))/dfdx(rand_x,rand_y)
-    r = (dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y))/dfdy(rand_x,rand_y)
+    z = old_div((f(rand_x,rand_y) - g(rand_x,rand_y)),f(rand_x,rand_y))
+    q = old_div((dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y)),dfdx(rand_x,rand_y))
+    r = old_div((dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y)),dfdy(rand_x,rand_y))
     print(z)
     #print(q)
     #print(r)
@@ -3442,10 +3447,10 @@
     rand_x = RNG.rand(1000)*5.0
     rand_y = RNG.rand(1000)*5.0
     rand_z = RNG.rand(1000)*5.0
-    z = (f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z))/f(rand_x,rand_y,rand_z)
-    q = (dfdx(rand_x,rand_y,rand_z) - g.derivativeX(rand_x,rand_y,rand_z))/dfdx(rand_x,rand_y,rand_z)
-    r = (dfdy(rand_x,rand_y,rand_z) - g.derivativeY(rand_x,rand_y,rand_z))/dfdy(rand_x,rand_y,rand_z)
-    p = (dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z))/dfdz(rand_x,rand_y,rand_z)
+    z = old_div((f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z)),f(rand_x,rand_y,rand_z))
+    q = old_div((dfdx(rand_x,rand_y,rand_z) - g.derivativeX(rand_x,rand_y,rand_z)),dfdx(rand_x,rand_y,rand_z))
+    r = old_div((dfdy(rand_x,rand_y,rand_z) - g.derivativeY(rand_x,rand_y,rand_z)),dfdy(rand_x,rand_y,rand_z))
+    p = old_div((dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z)),dfdz(rand_x,rand_y,rand_z))
    z.sort()
@@ -3479,11 +3484,11 @@
     rand_y = RNG.rand(N)*5.0
     rand_z = RNG.rand(N)*5.0
     t_start = clock()
-    z = (f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z))/f(rand_w,rand_x,rand_y,rand_z)
-    q = (dfdw(rand_w,rand_x,rand_y,rand_z) - g.derivativeW(rand_w,rand_x,rand_y,rand_z))/dfdw(rand_w,rand_x,rand_y,rand_z)
-    r = (dfdx(rand_w,rand_x,rand_y,rand_z) - g.derivativeX(rand_w,rand_x,rand_y,rand_z))/dfdx(rand_w,rand_x,rand_y,rand_z)
-    p = (dfdy(rand_w,rand_x,rand_y,rand_z) - g.derivativeY(rand_w,rand_x,rand_y,rand_z))/dfdy(rand_w,rand_x,rand_y,rand_z)
-    s = (dfdz(rand_w,rand_x,rand_y,rand_z) - g.derivativeZ(rand_w,rand_x,rand_y,rand_z))/dfdz(rand_w,rand_x,rand_y,rand_z)
+    z = old_div((f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z)),f(rand_w,rand_x,rand_y,rand_z))
+    q = old_div((dfdw(rand_w,rand_x,rand_y,rand_z) - g.derivativeW(rand_w,rand_x,rand_y,rand_z)),dfdw(rand_w,rand_x,rand_y,rand_z))
+    r = old_div((dfdx(rand_w,rand_x,rand_y,rand_z) - g.derivativeX(rand_w,rand_x,rand_y,rand_z)),dfdx(rand_w,rand_x,rand_y,rand_z))
+    p = old_div((dfdy(rand_w,rand_x,rand_y,rand_z) - g.derivativeY(rand_w,rand_x,rand_y,rand_z)),dfdy(rand_w,rand_x,rand_y,rand_z))
+    s = old_div((dfdz(rand_w,rand_x,rand_y,rand_z) - g.derivativeZ(rand_w,rand_x,rand_y,rand_z)),dfdz(rand_w,rand_x,rand_y,rand_z))
     t_end = clock()
     z.sort()
@@ -3502,8 +3507,8 @@
     rand_x = RNG.rand(100)*5.0
     rand_y = RNG.rand(100)*5.0
-    z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
-    q = (f(x_temp,y_temp) - g(x_temp,y_temp))/f(x_temp,y_temp)
+    z = old_div((f(rand_x,rand_y) - g(rand_x,rand_y)),f(rand_x,rand_y))
+    q = old_div((f(x_temp,y_temp) - g(x_temp,y_temp)),f(x_temp,y_temp))
     #print(z)
     #print(q)
@@ -3523,10 +3528,10 @@
     rand_x = RNG.rand(1000)*5.0
     rand_y = RNG.rand(1000)*5.0
     rand_z = RNG.rand(1000)*5.0
-    z = (f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z))/f(rand_x,rand_y,rand_z)
-    q = (dfdx(rand_x,rand_y,rand_z) - g.derivativeX(rand_x,rand_y,rand_z))/dfdx(rand_x,rand_y,rand_z)
-    r = (dfdy(rand_x,rand_y,rand_z) - g.derivativeY(rand_x,rand_y,rand_z))/dfdy(rand_x,rand_y,rand_z)
-    p = (dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z))/dfdz(rand_x,rand_y,rand_z)
+    z = old_div((f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z)),f(rand_x,rand_y,rand_z))
+    q = old_div((dfdx(rand_x,rand_y,rand_z) - g.derivativeX(rand_x,rand_y,rand_z)),dfdx(rand_x,rand_y,rand_z))
+    r = old_div((dfdy(rand_x,rand_y,rand_z) - g.derivativeY(rand_x,rand_y,rand_z)),dfdy(rand_x,rand_y,rand_z))
+    p = old_div((dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z)),dfdz(rand_x,rand_y,rand_z))
     p.sort()
     plt.plot(p)
@@ -3552,7 +3557,7 @@
     rand_y = RNG.rand(N)*5.0
     rand_z = RNG.rand(N)*5.0
     t_start = clock()
-    z = (f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z))/f(rand_w,rand_x,rand_y,rand_z)
+    z = old_div((f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z)),f(rand_w,rand_x,rand_y,rand_z))
     t_end = clock()
     #print(z)
     print(t_end-t_start)
@@ -3574,9 +3579,9 @@
     rand_x = RNG.rand(1000)*5.0
     rand_y = RNG.rand(1000)*5.0
     t_start = clock()
-    z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
-    q = (dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y))/dfdx(rand_x,rand_y)
-    r = (dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y))/dfdy(rand_x,rand_y)
+    z = old_div((f(rand_x,rand_y) - g(rand_x,rand_y)),f(rand_x,rand_y))
+    q = old_div((dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y)),dfdx(rand_x,rand_y))
+    r = old_div((dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y)),dfdy(rand_x,rand_y))
     t_end = clock()
     z.sort()
     q.sort()
@@ -3609,8 +3614,8 @@
     rand_x = RNG.rand(N)*5.0
     rand_y = RNG.rand(N)*5.0
    rand_z = RNG.rand(N)*5.0
-    z = (f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z))/f(rand_x,rand_y,rand_z)
-    p = (dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z))/dfdz(rand_x,rand_y,rand_z)
+    z = old_div((f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z)),f(rand_x,rand_y,rand_z))
+    p = old_div((dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z)),dfdz(rand_x,rand_y,rand_z))
     p.sort()
     plt.plot(p)
@@ -3648,7 +3653,7 @@
     rand_z = RNG.rand(N)*5.0
     t_start = clock()
-    z = (f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z))/f(rand_w,rand_x,rand_y,rand_z)
+    z = old_div((f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z)),f(rand_w,rand_x,rand_y,rand_z))
     t_end = clock()
     z.sort()
     print(z)
```
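The z/q/r/p/s lines in this disabled test block are relative errors between a true function and its fitted interpolant, computed from float arrays, so the `/` spelling is again safe under true division. A sketch with stand-in functions (`f_true` and `g_approx` are hypothetical, not the interpolators tested above):

```python
from __future__ import division
import numpy as np

f_true = np.exp                               # stand-in exact function (never zero)
g_approx = lambda x: np.exp(x)*(1.0 + 1e-9)   # stand-in interpolant with a tiny error

RNG = np.random.RandomState(0)
rand_x = RNG.rand(100)*5.0
z = (f_true(rand_x) - g_approx(rand_x))/f_true(rand_x)  # elementwise float division
z.sort()
print(z[-1])  # largest (signed) relative error, about -1e-9 here
```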