Search results for: absolute time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17996

17816 Construction Time - Cost Trade-Off Analysis Using Fuzzy Set Theory

Authors: V. S. S. Kumar, B. Vikram, G. C. S. Reddy

Abstract:

Time and cost are the two critical objectives of construction project management and are not independent but intricately related. Trade-offs between project duration and cost are extensively discussed during project scheduling because of their practical relevance. Generally, when the project duration is compressed, the project calls for more labor and more productive equipment, which increases the cost. Thus, construction time-cost optimization is defined as the process of identifying suitable construction activities for speeding up in order to attain the best possible savings in both time and cost. As there is a hidden trade-off relationship between project time and cost, it might be difficult to predict whether the total cost would increase or decrease as a result of compressing the schedule. Different combinations of duration and cost for the activities associated with the project determine the best set in the time-cost optimization. Therefore, contractors need to select the best combination of time and cost for each activity, all of which will ultimately determine the project duration and cost. In this paper, fuzzy set theory is used to model the uncertainties in the project environment for time-cost trade-off analysis.
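The trade-off logic above can be illustrated with triangular fuzzy numbers, a common representation in fuzzy set theory; the activity figures and the centroid defuzzification below are hypothetical and are not taken from the paper:

```python
# Illustrative sketch only: triangular fuzzy numbers (l, m, u) for a
# time-cost trade-off, not the authors' actual formulation.
def tri_add(a, b):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def centroid(t):
    """Defuzzify a triangular fuzzy number by its centroid (l + m + u) / 3."""
    return sum(t) / 3.0

# Hypothetical activity: normal vs. crashed duration (days) and cost (k$)
normal  = {"duration": (10, 12, 15), "cost": (40, 45, 52)}
crashed = {"duration": (7, 8, 10),  "cost": (55, 60, 70)}

time_saved = centroid(normal["duration"]) - centroid(crashed["duration"])
extra_cost = centroid(crashed["cost"]) - centroid(normal["cost"])
cost_slope = extra_cost / time_saved  # k$ per day of compression
```

A contractor would crash the activities with the smallest cost slope first, which is the essence of the trade-off analysis described.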

Keywords: fuzzy sets, uncertainty, qualitative factors, decision making

Procedia PDF Downloads 619
17815 Feeling Sorry for Some Creditors

Authors: Hans Tjio, Wee Meng Seng

Abstract:

The interaction of contract and property has always been a concern in corporate and commercial law, where there are internal structures created that may not match the externally perceived image generated by the labels attached to those structures. We will focus, in particular, on the priority structures created by affirmative asset partitioning, which have increasingly come under challenge by those attempting to negotiate around them. The most prominent has been the AT1 bonds issued by Credit Suisse which were wiped out before its equity when the troubled bank was acquired by UBS. However, this should not have come as a surprise to those whose “bonds” had similarly been “redeemed” upon the occurrence of certain reference events in countries like Singapore, Hong Kong and Taiwan during their Minibond crisis linked to US sub-prime defaults. These were derivatives classified as debentures and sold as such. At the same time, we are again witnessing “liabilities” seemingly ranking higher up the balance sheet ladder, finding themselves lowered in events of default. We will examine the mechanisms holders of perpetual securities or preference shares have tried to use to protect themselves. This is happening against a backdrop that sees a rise in the strength of private credit and inter-creditor conflicts. The restructuring regime of the hybrid scheme in Singapore now, while adopting the absolute priority rule in Chapter 11 as the quid pro quo for creditor cramdown, does not apply to shareholders and so exempts them from cramdown. Complicating the picture further, shareholders are not exempted from cramdown in the Dutch scheme, but it adopts a relative priority rule. At the same time, the important UK Supreme Court decision in BTI 2014 LLC v Sequana [2022] UKSC 25 has held that directors’ duties to take account of creditor interests are activated only when a company is almost insolvent. All this has been complicated by digital assets created by businesses. 
Investors are quite happy to have them classified as property (like a thing) when it comes to their transferability, but then, when the issuer defaults, to have them seen as a claim on the business (as a chose in action), which puts them at the level of a creditor. But these hidden interests will not show themselves on an issuer’s balance sheet until it is too late to be considered, and yet, if accepted, they may also prevent any meaningful restructuring.

Keywords: asset partitioning, creditor priority, restructuring, BTI v Sequana, digital assets

Procedia PDF Downloads 41
17814 Distributed Perceptually Important Point Identification for Time Series Data Mining

Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung

Abstract:

In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally developed for financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in the Internet of Things (IoT) environment. Given the nature of PIP identification and its successful applications, it is worth further exploring the opportunity to apply PIP to time series ‘Big Data’. However, the performance of PIP identification has always been considered a limitation when dealing with ‘big’ time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck encountered when running the PIP identification process on a standalone computer, and the distributed versions yield an improvement in terms of speed.
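As a rough sketch of the underlying idea (the paper’s contribution is the SB-Tree-based distributed version, which is not reproduced here), a standalone PIP selection by vertical distance to the chord between already-chosen points might look like this:

```python
def pip_identify(series, k):
    """Select k perceptually important points: start with the two
    endpoints, then repeatedly add the point with the largest vertical
    distance to the chord joining its adjacent, already-chosen PIPs."""
    n = len(series)
    pips = [0, n - 1]
    while len(pips) < k:
        best = (-1.0, None)
        for left, right in zip(pips, pips[1:]):
            slope = (series[right] - series[left]) / (right - left)
            for i in range(left + 1, right):
                # vertical distance from point i to the chord (left, right)
                d = abs(series[i] - (series[left] + slope * (i - left)))
                if d > best[0]:
                    best = (d, i)
        if best[1] is None:  # no interior points left to add
            break
        pips.append(best[1])
        pips.sort()
    return pips

# toy series: the spike at index 2 and the dip at index 5 are picked first
pips = pip_identify([0, 1, 5, 1, 0, -3, 0], 4)  # -> [0, 2, 5, 6]
```

This quadratic-time rescanning of every segment is exactly the standalone bottleneck that a distributed or tree-indexed variant would aim to remove.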

Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining

Procedia PDF Downloads 394
17813 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of a time lag can have a significant influence on model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
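The lag construction and the error metrics mentioned above can be sketched as follows; the helper names and toy data are illustrative and do not reproduce the study’s actual pipeline:

```python
import math

def make_lagged(features, target, lag):
    """Pair each feature row with the target `lag` steps ahead,
    mimicking the no-lag, 6-month and 12-month setups."""
    if lag == 0:
        return features, target
    return features[:-lag], target[lag:]

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

# toy monthly data: macro features and a home price index
X = [[1.0], [2.0], [3.0], [4.0]]
y = [10.0, 12.0, 14.0, 16.0]
X_lagged, y_lagged = make_lagged(X, y, lag=1)  # predict next period's index
```

Any of the three model families (LR, RF, ANN) would then be fitted on `X_lagged` against `y_lagged` and scored with these metrics.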

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 76
17812 Characterization on Molecular Weight of Polyamic Acids Using GPC Coupled with Multiple Detectors

Authors: Mei Hong, Wei Liu, Xuemin Dai, Yanxiong Pan, Xiangling Ji

Abstract:

Polyamic acid (PAA) is the precursor of polyimide (PI) prepared by a two-step method; its molecular weight and molecular weight distribution not only play an important role during preparation and processing but also influence the final performance of the PI. However, precise characterization of the molecular weight of PAA is still a challenge because of the very complicated interactions in the solution system, including electrostatic, hydrogen-bond, and dipole-dipole interactions. Thus, it is necessary to establish a suitable strategy which can completely suppress these complex effects and yield reasonable molecular weight data. Herein, gel permeation chromatography (GPC) coupled with differential refractive index (RI) and multi-angle laser light scattering (MALLS) detectors was applied to measure the molecular weight of (6FDA-DMB) PAA using different mobile phases: LiBr/DMF, LiBr/H3PO4/THF/DMF, LiBr/HAc/THF/DMF, and LiBr/HAc/DMF. It was found that the combination of LiBr with HAc can shield the above-mentioned complex interactions and is more conducive to the separation of PAA than the addition of LiBr alone in DMF. LiBr/HAc/DMF was employed for the first time as a mild mobile phase to effectively separate PAA and determine its molecular weight. After a series of conditional experiments, 0.02 M LiBr/0.2 M HAc/DMF was fixed as the optimized mobile phase to measure the relative and absolute molecular weights of the (6FDA-DMB) PAA prepared, and the Mw values obtained from GPC-MALLS and GPC-RI were 35,300 g/mol and 125,000 g/mol, respectively. Notably, such a mobile phase is also applicable to other PAA samples with different structures, and the final molecular weight results are also reproducible.
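The weight-average molecular weight reported by both detectors follows the standard definition Mw = Σ(w_i·M_i) / Σ(w_i) over chromatogram slices; a minimal sketch with made-up slice data:

```python
def weight_average_mw(masses, weights):
    """Weight-average molecular weight: Mw = sum(w_i * M_i) / sum(w_i),
    where w_i is the mass (detector signal) of the slice with molar mass M_i."""
    return sum(w * m for w, m in zip(weights, masses)) / sum(weights)

# hypothetical chromatogram slices (molar mass in g/mol, relative mass signal)
mw = weight_average_mw([20_000, 40_000, 60_000], [1.0, 2.0, 1.0])  # -> 40000.0
```

RI-based GPC computes this against a calibration curve (relative Mw), while MALLS measures the molar mass of each slice directly (absolute Mw), which is why the two values in the abstract differ.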

Keywords: polyamic acids, polyelectrolyte effects, gel permeation chromatography, mobile phase, molecular weight

Procedia PDF Downloads 22
17811 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

There exists some time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate between efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to ‘1’. This problem can be resolved by a super-efficiency model. However, a super-efficiency model sometimes causes an infeasibility problem. This paper suggests an output-oriented super-efficiency model for efficiency evaluation under the consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.

Keywords: DEA, super-efficiency, time lag, research activities

Procedia PDF Downloads 623
17810 Modern Scotland Yard: Improving Surveillance Policies Using Adversarial Agent-Based Modelling and Reinforcement Learning

Authors: Olaf Visker, Arnout De Vries, Lambert Schomaker

Abstract:

Predictive policing refers to the use of analytical techniques to identify potential criminal activity. It has been widely implemented by various police departments. Being a relatively new area of research, there are, to the authors’ knowledge, no absolute tried-and-true methods, and existing approaches still exhibit a variety of potential problems. One of those problems is closely related to the lack of understanding of how acting on these predictions influences crime itself. The goal of law enforcement is ultimately crime reduction. As such, a policy needs to be established that best facilitates this goal. This research aims to find such a policy by using adversarial agent-based modeling in combination with modern reinforcement learning techniques. We present baseline models for both law enforcement and criminal agents and compare their performance to their respective reinforcement-learning models. The experiments show that our smart law enforcement model is capable of reducing crime by making more deliberate choices regarding the locations of potential criminal activity. Furthermore, it is shown that the smart criminal model exhibits behavior consistent with popular crime theories and outperforms the baseline model in terms of crimes committed and time to capture. It does, however, still suffer from the difficulties of capturing long-term rewards and learning how to handle multiple opposing goals.

Keywords: adversarial, agent based modelling, predictive policing, reinforcement learning

Procedia PDF Downloads 124
17809 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. The Multi-period Input (MpI) and Multi-period Output (MpO) models are integrated models that calculate simple efficiency considering the time lag effect. However, these models cannot discriminate between efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to ‘1’. That is, efficient DMUs cannot be distinguished because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under the consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.
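To convey the super-efficiency idea in the simplest possible setting (a single input and output, far simpler than the multi-period LP the paper proposes), each DMU can be scored against the best of the other DMUs, so efficient units can exceed 1 and become distinguishable:

```python
def super_efficiency(dmus):
    """Single-input, single-output super-efficiency scores: each DMU's
    output/input ratio relative to the best ratio among the *other*
    DMUs, so efficient units score above 1. Illustration only; the
    paper's model is an output-oriented LP with multi-period data."""
    ratios = [out / inp for inp, out in dmus]
    scores = []
    for i, r in enumerate(ratios):
        best_other = max(r2 for j, r2 in enumerate(ratios) if j != i)
        scores.append(r / best_other)
    return scores

# three hypothetical DMUs as (input, output) pairs
scores = super_efficiency([(1, 4), (1, 2), (2, 2)])  # -> [2.0, 0.5, 0.25]
```

Under the basic DEA model the first DMU would simply score 1; here its score of 2.0 expresses how much it dominates the rest, which is what restores discrimination among efficient units.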

Keywords: DEA, super-efficiency, time lag, multi-periods input

Procedia PDF Downloads 446
17808 Optimization of Process Parameters and Modeling of Mass Transport during Hybrid Solar Drying of Paddy

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying is one of the most critical unit operations for prolonging the shelf-life of food grains in order to ensure global food security. Photovoltaic-integrated solar dryers can be a sustainable alternative to energy-intensive thermal dryers, as they are capable of drying during off-sunshine hours and provide better control over drying conditions. However, the performance and reliability of PV-based solar dryers depend hugely on climatic conditions, which drastically affect process parameters. Therefore, to ensure the quality and prolonged shelf-life of paddy, optimization of process parameters for solar dryers is critical. Proper moisture distribution within the grains is the most decisive factor for enhancing the shelf-life of paddy; therefore, modeling of mass transport can provide better insight into moisture migration. Hence, the present work aims at optimizing the process parameters and developing a 3D finite element model (FEM) for predicting the moisture profile in paddy during solar drying. Optimization of the process parameters (power level, air velocity and moisture content) was done using a Box-Behnken design in Design-Expert software. Furthermore, COMSOL Multiphysics was employed to develop a 3D finite element model for predicting the moisture profile. The optimized conditions for drying paddy were found to be 700 W, 2.75 m/s and 13% wb, with an optimum temperature, milling yield and drying time of 42˚C, 62% and 86 min, respectively, and a desirability of 0.905. Furthermore, a 3D finite element model (FEM) for predicting moisture migration in a single kernel at every time step was developed. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with experimental results. The above optimized conditions can be used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of the product and thereby help achieve global food and energy security.

Keywords: finite element modeling, hybrid solar drying, mass transport, paddy, process optimization

Procedia PDF Downloads 115
17807 Time Compression in Engineer-to-Order Industry: A Case Study of a Norwegian Shipbuilding Industry

Authors: Tarek Fatouh, Chehab Elbelehy, Alaa Abdelsalam, Eman Elakkad, Alaa Abdelshafie

Abstract:

This paper aims to explore the possibility of time compression in Engineer-to-Order production networks. A case study research method is used in a Norwegian shipbuilding project by implementing the value stream mapping lean tool, with total cycle time as the unit of analysis. The analysis demonstrated the time deviations for the planned tasks in one of the processes in the shipbuilding project. The authors then developed a future state map by removing time wastes from the value stream process.

Keywords: engineer to order, total cycle time, value stream mapping, shipbuilding

Procedia PDF Downloads 125
17806 Time and Kinematics of Moving Bodies

Authors: Muhammad Omer Farooq Saeed

Abstract:

The purpose of the proposal is to find out what time actually is, and to understand the natural phenomenon of the behavior of time and light corresponding to the motion of bodies at relatively high speeds. The utmost concern of the paper is to deal with possible demerits in the equations of relativity, thereby providing some valuable extensions to those equations and concepts. The idea used develops the most basic conception of the relative motion of a body with respect to space, and a real understanding of time and the variation of the energy of the body in different frames of reference. The results show the development of a completely new understanding of time, relative motion and energy, along with some extensions to the equations of special relativity, most importantly time dilation and the mass-energy relationship, that will explain all frames of a body, all in one go. The proposal also raises serious questions about the validity of the “Principle of Equivalence” on which General Relativity is based, most importantly a serious case of light bending that eventually goes against the theory’s own governing concepts of space-time. The results also predict the existence of a completely new field that explains just how and why bodies acquire energy in space-time. This field explains the production of gravitational waves based on time. All in all, this proposal challenges the formulas and conceptions of Special and General Relativity, respectively.

Keywords: time, relative motion, energy, speed, frame of reference, photon, curvature, space-time, time differentials

Procedia PDF Downloads 41
17805 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty

Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton

Abstract:

Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration being the most widely described. We describe a “quartered head technique” (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA in order to ensure reconstruction of leg length, offset and stem version, such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out; after exclusions, 124 hips remained in the final analysis. Power analysis indicated a minimum of 37 patients was required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double-taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with those of the contralateral native hip, and the difference was then calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. As references, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. A discrepancy of less than 6 mm LLD was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers.
Results: The mean absolute post-operative difference in leg length from the contralateral leg was +3.58 mm. 84% of patients (104/124) had an LLD within ±6 mm of the contralateral limb. The mean absolute post-operative difference in offset from the contralateral leg was +3.88 mm (range -15 to +9 mm, median 3 mm). 90% of patients (112/124) were within ±6 mm offset of the contralateral limb. There was no statistical difference noted between observer measurements. Conclusion: The QHT provides a simple, inexpensive yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or other described techniques may enable surgeons to reduce even further the discrepancies between the pre-operative state and the post-operative outcome.

Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique

Procedia PDF Downloads 50
17804 Modeling User Departure Time Choice for Trips in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Modeling users’ decisions on departure time choice is the main motivation for this research. In particular, it examines the impact of socio-demographic features, household and job characteristics, and trip qualities on individuals’ departure time choice. Departure time alternatives are presented as adjacent discrete time periods, and the choice between these alternatives is made using a discrete choice model. Since a great deal of early morning trips, and the traffic congestion at that time of day, comprise work trips, the focus of this study is on the work trip over the entire day. Therefore, using a stated preference questionnaire, this study models users’ departure time choice as affected by the congestion pricing plan in downtown Tehran. Experimental results demonstrate a significant socio-demographic impact on work trips’ departure time. These findings have substantial implications for transportation planning analysis. In particular, the analysis shows that ignoring the effects of these variables could result in erroneous information; consequently, decisions in the field of transportation planning and air quality would fail and cause a loss of financial resources.
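A discrete choice over adjacent departure-time periods is typically formulated as a multinomial logit; the sketch below uses made-up systematic utilities rather than coefficients estimated from the Tehran survey:

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities: P_i = exp(V_i) / sum_j exp(V_j),
    where V_i is the systematic utility of departure period i."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical utilities for four adjacent morning departure periods;
# a congestion charge would enter V_i through a cost coefficient
probs = mnl_probs([-0.2, 0.5, 0.1, -0.8])
```

Socio-demographic variables enter the utilities as explanatory terms, which is why omitting them, as the abstract warns, biases the predicted shares for each period.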

Keywords: modeling, departure time, travel timing, time of the day, congestion pricing, transportation planning

Procedia PDF Downloads 408
17803 Subjective Time as a Marker of the Present Consciousness

Authors: Anastasiya Paltarzhitskaya

Abstract:

Subjective time plays an important role in consciousness processes and self-awareness of the present moment. The concept of intrinsic neural timescales (INT) explains the differences in perceiving various time intervals. The capacity to experience the present builds on the fundamental properties of temporal cognition. The challenge that both philosophy and neuroscience try to answer is how the brain differentiates the present from the past and future. In our work, we analyze papers which describe mechanisms involved in the perception of the ‘present’ and the ‘non-present’, i.e., future and past moments. Taking into account that we perceive time intervals even during rest or relaxation, we suppose that default-mode network activity can code time features, including the present moment. We can compare results of time perception studies in which brain activity was recorded in states with different flows of time, including resting states and “mental time travel”. According to the concept of mental time travel, we employ a range of scenarios which demand episodic memory. However, some papers show that the hippocampal region does not activate during time travel. This is a controversial result, further complicated by the phenomenological aspect that includes a holistic set of information about the individual’s past and future.

Keywords: temporal consciousness, time perception, memory, present

Procedia PDF Downloads 37
17802 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network

Authors: Yasaman Sanayei, Alireza Bahiraie

Abstract:

This paper presents a systematic methodology based on the application of artificial neural networks to the sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology, which is especially suited for nutrient removal. Removing reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R2 > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with a constant predefined duration of the stages, thus resulting in low-efficiency operation. Data obtained from the on-line electronic sensors installed in the SBR and from the quality control laboratory analysis have been used to develop the optimal architectures of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.

Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis

Procedia PDF Downloads 387
17801 Issues of Time's Urgency and Ritual in Children's Picture Books: A Closer Look at the Contributions of Grandparents

Authors: Karen Armstrong

Abstract:

Although invisible and fleeting, time is an essential variable in perception. Ritual is proposed as an antithesis to the passage of time, a way of linking our narratives with the past, present and future. This qualitative exploration examines a variety of award-winning twentieth-century children’s picture books, specifically regarding the issues of time’s urgency and ritual with respect to children and grandparents. The paper begins with a consideration of issues of time from the field of psychology with regard to age, specifically contrasting later age and childhood. Next, the value of ritual as represented by the presence of grandparents in children’s books is considered. Specific instances of the contributions of grandparents or older adults with regard to this balancing function between time’s urgency and ritual are then discussed. Recommendations for future research include a consideration of the depiction of grandparents or older characters in books for older children.

Keywords: children's picture books, grandparents, ritual, time

Procedia PDF Downloads 277
17800 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content

Authors: Moses Kolade Ogun, Ina Korner

Abstract:

To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated by the deinking process and constitutes a major fraction of the residues generated by the European pulp and paper industry. The traditional treatment of DS by incineration is capital intensive due to the energy required for dewatering and the need for a complementary fuel source owing to DS’s low calorific value. This could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of different DS streams (different dewatering degrees) and the influence of the high calcium carbonate content of DS on its biogas potential. A dewatered DS (solid fraction) sample from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid and liquid fractions were mixed in proportion to realize DS with different water contents (55-91% fresh mass). Spiked DS samples using deionized water, cellulose and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0-40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied using a 1-liter batch test system under mesophilic conditions and run for 21 days. Specific biogas potentials in the range of 133-230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL biogas/kg fresh mass). By comparing the absolute and specific biogas potential curves, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified.
This degree of dewatering is a compromise when factors such as biogas yield, reactor size, energy required for dewatering and operating cost are considered. No inhibitory influence on the biogas potential of DS was observed due to its reportedly high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation due to the high C/N ratio of DS, can increase biogas yield.
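The trade-off between specific and absolute biogas potential described above is a simple mass-balance conversion; the numbers below are assumed for illustration (ash content set to zero, not a measured value):

```python
def absolute_biogas(specific_nl_per_kg_odm, water_content, ash_fraction=0.0):
    """Convert specific biogas potential (NL per kg organic dry matter)
    to absolute potential per kg fresh mass, given the water content
    (fraction of fresh mass) and the ash fraction of the dry matter."""
    dry_matter = 1.0 - water_content
    organic_dm = dry_matter * (1.0 - ash_fraction)
    return specific_nl_per_kg_odm * organic_dm

# at ~70% water content, 200 NL/kg-ODM corresponds to 60 NL per kg fresh mass
per_kg_fresh = absolute_biogas(200.0, 0.70)
```

The conversion shows why a wetter sludge can have a higher specific potential yet a lower absolute potential: each kg of fresh mass simply contains less organic dry matter.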

Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content

Procedia PDF Downloads 133
17799 Calculation of the Thermal Stresses in an Elastoplastic Plate Heated by Local Heat Source

Authors: M. Khaing, A. V. Tkacheva

Abstract:

The work is devoted to solving the problem of temperature stresses caused by point heating of a round plate. The plate is made of an elastoplastic material, so the Prandtl-Reuss model is used. A piecewise-linear Ishlinsky-Ivlev flow condition, in which the yield stress depends on temperature, is taken as the loading surface. Piecewise-linear conditions (Tresca or Ishlinsky-Ivlev), in contrast to the Mises condition, make it possible to obtain solutions of the equilibrium equation in analytical form. In the problem under consideration, however, it is impossible to obtain a solution using the Tresca conditions. This is due to the fact that the equilibrium equation ceases to be satisfied when two Tresca conditions are fulfilled at once. Using the Ishlinsky-Ivlev plastic flow conditions allows one to solve the problem. At the same time, there is no solution on the edges of the Ishlinsky-Ivlev hexagon in a plane stressed state; therefore, the authors propose to jump from one edge to the adjacent edge, which makes it possible to obtain an analytical solution. The paper compares solutions of the problem of plate thermal deformation. One of the solutions was obtained under the condition that the elastic moduli (Young’s modulus and Poisson’s ratio) depend on temperature. The yield point is assumed to be parabolically temperature dependent. The main result of the comparison is that the region of irreversible deformation is larger in the calculations obtained for the problem with constant elastic moduli. There is no repeated plastic flow in the solution of the problem with temperature-dependent elastic moduli.
The absolute value of the irreversible deformations is higher for the solution of the problem in which the elastic moduli are constant; there are also insignificant differences in the distribution of the residual stresses.

Keywords: temperature stresses, elasticity, plasticity, Ishlinsky-Ivlev condition, plate, annular heating, elastic moduli

Procedia PDF Downloads 119
17798 A Postmodern Framework for Quranic Hermeneutics

Authors: Christiane Paulus

Abstract:

Post-Islamism assumes that the Quran should not be viewed in terms of what Lyotard identifies as a ‘meta-narrative'; however, its socio-ethical content can be viewed as a critique of power discourse (Foucault). Practicing religion seems to be limited to rites and individual spirituality, taqwa. Alternatively, can we build on Muhammad Abduh's classic-modern reform and develop it through a postmodernist frame? This is the main question of this study. Through his general and vague remarks on the context of the Quran, Abduh was the first to refer to the historical and cultural distance of the text as an obstacle for interpretation. His application, however, corresponded to the modern absolute idea of an authentic sharia. He was followed by Amin al-Khuli, who hermeneutically linked the content of the Quran to the theory of evolution. Fazlur Rahman and Nasr Hamid Abu Zeid remain reluctant to go beyond the general level in terms of context. The hermeneutic circle therefore persists as a challenge: how to get out of it and overcome one's own assumptions. Insight into, and acceptance of, the lasting ambivalence of understanding can be grasped as a postmodern approach; it is documented in Derrida's discovery of the shift in text meanings, différance, and also in Lyotard's theory of the différend. The resulting mixture of meanings (Wolfgang Welsch) can be read together with the classic ambiguity of the premodern interpreters of the Quran (Thomas Bauer). Confronting hermeneutic difficulties in general, Niklas Luhmann shows every description to be an attribution, a tautology, i.e., something remaining in the circle. ‘De-tautologization' is possible, namely by analyzing the distinctions, in the sense of objective, temporal, and social information, that every text contains. This could be expanded with the Kantian aesthetic dimension of reason (the Critique of Judgment) corresponding to the iʿjaz of the Quran.
Luhmann asks, ‘What distinction does the observer/author make?' The Quran, as speech from God to the first listeners, could be seen as a discourse responding to the problems of everyday life at that time, which can be viewed as the general goal of the entire Quran. Through reconstructing Quranic lifeworlds (Alfred Schütz) in detail, the social structure crystallizes the socio-economic differences and the enormous poverty. The Quranic instruction to provide for the basic needs of neglected groups, which often intersect (the old, the poor, slaves, women, children), can be seen immediately in the text. First, the references to lifeworlds, social problems, and discourses in longer Quranic passages should be hypothesized. Subsequently, information could be extracted from the classic commentaries; the classical tafsir, in particular, contains rich narrative material for such reconstruction. By selecting and assigning suitable, specific context information, the meaning of the description becomes condensed (Clifford Geertz). In this manner, the text necessarily acquires an alienation and becomes newly accessible. The socio-ethical implications can thus be grasped from the difference between the original problem and the revealed, improved order or procedure; this small step can be materialized as such, not as an absolute solution but as offering plausible patterns for today's challenges, such as the Agenda 2030.

Keywords: postmodern hermeneutics, condensed description, sociological approach, small steps of reform

Procedia PDF Downloads 187
17797 Shoulder Range of Motion Measurements using Computer Vision Compared to Hand-Held Goniometric Measurements

Authors: Lakshmi Sujeesh, Aaron Ramzeen, Ricky Ziming Guo, Abhishek Agrawal

Abstract:

Introduction: Range of motion (ROM) is often measured by physiotherapists using a hand-held goniometer as part of mobility assessment for diagnosis. Due to the nature of the hand-held goniometer measurement procedure, readings often tend to vary depending on the physical therapist taking the measurements (Riddle et al.). This study aims to validate computer vision software readings against goniometric measurements, so that quick and consistent ROM measurements can be taken by clinicians. The use of this computer vision software hopes to improve the future of the musculoskeletal space with more efficient diagnosis from recordings of a patient's ROM, with minimal human error across different physical therapists. Methods: Using hand-held long-arm goniometer measurements as the “gold standard”, healthy study participants (n = 20) performed 4 exercises with both arms: front elevation, abduction, internal rotation, and external rotation. Active ROM was assessed using the computer vision software at different angles set by the goniometer for each exercise. The Intraclass Correlation Coefficient (ICC), using a 2-way random effects model, box-whisker plots, and root mean square error (RMSE) were used to find the degree of correlation and the absolute error between set and recorded angles across repeated trials by the same rater. Results: ICC (2,1) values for all 4 exercises are above 0.9, indicating excellent reliability. The lowest overall RMSE was for external rotation (5.67°) and the highest for front elevation (8.00°). Box-whisker plots showed a potential zero error in the measurements taken by the computer vision software for abduction, where the absolute error for measurements taken at 0° is shifted away from the ideal zero line, with the lowest recorded error being 8°.
Conclusion: Our results indicate that the computer vision software is valid and reliable for use in clinical settings by physiotherapists measuring shoulder ROM. Overall, computer vision helps improve accessibility to quality care for individual patients, with the ability to assess the ROM for their condition at home throughout a full cycle of musculoskeletal care (American Academy of Orthopaedic Surgeons) without the need for a trained therapist.
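The reliability statistics reported above can be reproduced with standard formulas. The sketch below assumes a simple subjects-by-raters matrix of angle readings (the study's actual data are not shown here, so the values are illustrative): it computes ICC(2,1) for a two-way random effects, absolute-agreement, single-measurement model, and the RMSE of software readings against the goniometer reference.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n subjects x k raters) matrix of measurements."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters mean square
    sse = ((Y - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def rmse(reference, measured):
    """Root mean square error of software readings vs. goniometer angles."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt((d ** 2).mean()))

# Hypothetical goniometer-set angles (col 0) vs. software readings (col 1)
Y = np.array([[30, 31], [60, 59], [90, 91], [120, 119], [150, 151]], float)
print(icc_2_1(Y))              # close to 1: excellent agreement
print(rmse(Y[:, 0], Y[:, 1]))  # degrees of absolute error
```

With near-perfect agreement the ICC approaches 1; values above 0.9, as reported in the abstract, are conventionally interpreted as excellent reliability.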

Keywords: physiotherapy, frozen shoulder, joint range of motion, computer vision

Procedia PDF Downloads 69
17796 The Quantity and Quality of Teacher Talking Time in EFL Classroom

Authors: Hanan Abufares Elkhimry

Abstract:

Looking for more effective teaching and learning approaches, teacher trainers have been telling trainee teachers to decrease their talking time, but the problem is how best to do this. Classroom research, specifically in the area of teacher talking time (TTT), is worthwhile, as it could improve the quality of language teaching, since the learners are the ones who should be practicing and using the language. This work hopes to ascertain whether teachers address this need in a way that provides students with opportunities to increase their production of language. This is a question worth answering. As many researchers have found, TTT should be decreased to 30% of classroom talking time, and student talking time (STT) should be increased up to 70%. Other researchers agree, but add that the quality of teacher talking time must also be considered. Therefore, this study investigates the balance between the quantity and quality of teacher talking time in the EFL classroom. For this piece of research, the amount of talking time was measured in four classrooms, and a checklist was used to assess its quality. In conclusion, the results showed that teachers may use more or less than 30% of the classroom talking time and still produce a successful classroom learning experience. An important factor affecting TTT is the English level of the students: this was clear in the classroom observations, where the highest TTT recorded was with the lowest English level group.

Keywords: teacher talking time TTT, learning experience, classroom research, effective teaching

Procedia PDF Downloads 387
17795 Effects of Polymer Adsorption and Desorption on Polymer Flooding in Waterflooded Reservoir

Authors: Sukruthai Sapniwat, Falan Srisuriyachai

Abstract:

Polymer flooding is one of the most well-known methods in Enhanced Oil Recovery (EOR) technology. It can be implemented after either primary or secondary recovery, resulting in favorable conditions for the displacement mechanism in order to lower the residual oil in the reservoir. Polymer substances can lower the mobility ratio of the whole process by increasing the viscosity of the injected water. Therefore, polymer flooding can increase volumetric sweep efficiency, which leads to a better recovery factor. Moreover, polymer adsorption onto the rock surface can help decrease the permeability contrast in reservoirs with high heterogeneity. Due to the reduction of the absolute permeability, the effective permeability to water, representing the flow ability of the injected fluid, is also reduced. Once polymer is adsorbed onto the rock surface, polymer molecules can be desorbed when different fluids are injected. This study is performed to evaluate the effects of the adsorption and desorption of polymer solutions on the oil recovery mechanism. A reservoir model is constructed in the reservoir simulation program STARS®, commercialized by the Computer Modeling Group (CMG). Various polymer concentrations, starting times of the polymer flooding process, and polymer injection rates were evaluated with selected values of the polymer desorption degree: 0, 25, 50, 75, and 100%. The higher the value, the more adsorbed polymer returns to the flowing fluid. According to the results, polymer desorption lowers polymer consumption, especially at low concentrations. Furthermore, the starting time of polymer flooding and the injection rate affect oil production. The results show that waterflooding followed by earlier polymer flooding can increase the oil recovery factor, while a higher injection rate also enhances recovery. Polymer concentration is related to polymer consumption through the two main benefits of polymer flooding described above.
Therefore, polymer slug size should be optimized based on polymer concentration. Polymer desorption re-mobilizes polymer previously adsorbed onto the rock surface, resulting in increased sweep efficiency in the later period of the polymer flooding process. Even though waterflooding supports polymer injectivity, water cut at the producer can prematurely terminate oil production. A higher injection rate decreases polymer adsorption because it reduces the retention time of the polymer flooding process.
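The mobility-ratio argument above can be made concrete with the standard end-point definition M = (k_rw/μ_w)/(k_ro/μ_o). The sketch below uses illustrative relative permeabilities and viscosities that are not taken from the paper; it shows how raising the injected-water viscosity with polymer moves M from unfavorable (M > 1) toward favorable (M ≤ 1):

```python
def mobility_ratio(krw, muw, kro, muo):
    """End-point mobility ratio M = (krw/muw) / (kro/muo).
    M <= 1 indicates a favorable, more piston-like displacement."""
    return (krw / muw) / (kro / muo)

# Hypothetical waterflood: water viscosity 0.5 cP, oil 5 cP
m_water = mobility_ratio(0.3, 0.5, 0.8, 5.0)    # about 3.75 (unfavorable)
# Polymer raises the injected-phase viscosity to, say, 15 cP
m_polymer = mobility_ratio(0.3, 15.0, 0.8, 5.0)  # well below 1 (favorable)
```

The drop in M is what drives the improved volumetric sweep efficiency and recovery factor described in the abstract.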

Keywords: enhanced oil recovery technology, polymer adsorption and desorption, polymer flooding, reservoir simulation

Procedia PDF Downloads 290
17794 Brain Age Prediction Based on Brain Magnetic Resonance Imaging by 3D Convolutional Neural Network

Authors: Leila Keshavarz Afshar, Hedieh Sajedi

Abstract:

Estimation of biological brain age from MR images is a topic that has been much addressed in recent years due to its importance for the early diagnosis of diseases such as Alzheimer's. In this paper, we use a 3D Convolutional Neural Network (CNN) to provide a method for estimating the biological age of the brain. The 3D-CNN model is trained on MRI data that has been normalized. In addition, to reduce computation while preserving overall performance, some informative slices are selected for age estimation. With this method, the biological age of individuals was estimated from the selected normalized data with a Mean Absolute Error (MAE) of 4.82 years.
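The reported figure of merit is straightforward to compute: MAE is the mean of the absolute differences between predicted and chronological ages. A minimal sketch (the ages below are illustrative, not from the study's dataset):

```python
import numpy as np

def mean_absolute_error(age_true, age_pred):
    """MAE between chronological ages and model-predicted brain ages (years)."""
    age_true = np.asarray(age_true, dtype=float)
    age_pred = np.asarray(age_pred, dtype=float)
    return float(np.mean(np.abs(age_pred - age_true)))

# Hypothetical chronological vs. predicted ages
print(mean_absolute_error([25, 40, 63, 71], [28, 38, 60, 75]))  # → 3.0
```

The per-subject signed difference (predicted minus chronological age) is often reported separately as the "brain-age gap" and used as a biomarker of accelerated aging.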

Keywords: brain age estimation, biological age, 3D-CNN, deep learning, T1-weighted image, SPM, preprocessing, MRI, canny, gray matter

Procedia PDF Downloads 117
17793 Modeling and Simulation Methods Using MATLAB/Simulink

Authors: Jamuna Konda, Umamaheswara Reddy Karumuri, Sriramya Muthugi, Varun Pishati, Ravi Shakya

Abstract:

This paper investigates the challenges involved in the mathematical modeling of plant simulation models, ensuring that the performance of the plant models is much closer to that of the real-time physical model. The paper includes the analysis and investigation performed on different methods of modeling, design, and development of the plant model. Issues which impact design time, model accuracy as a real-time model, and tool dependence are analyzed. The real-time hardware plant would be a combination of multiple physical models. It is challenging to test the complete system with all possible test scenarios, as there are possibilities of failure or damage to the system due to unwanted test execution in real time.

Keywords: model based design (MBD), MATLAB, Simulink, stateflow, plant model, real time model, real-time workshop (RTW), target language compiler (TLC)

Procedia PDF Downloads 319
17792 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment

Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali

Abstract:

Objective: A key barrier in clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws required to monitor neutropenia, the main side effect of the drug. WBC and ANC monitoring must occur throughout treatment. Accurate WBC and ANC counts are necessary for clinical decisions to halt, modify, or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil (ANC) counts through image analysis of capillary blood. POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 hematology laboratory analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900 analyzer. The analytical performance was also evaluated, including precision, using liquid quality controls (QCs) as well as patient samples near the medical decision points, and linearity, using mixes of high and low patient samples to create five concentrations. Results: In the precision study for QCs and whole blood, WBC and ANC showed CVs inside the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range with a correlation of 0.99. WBC and ANC from all analyzers correlated well with venous samples on the DxH900 across the tested sample ranges, with a correlation of > 0.95. The mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% L.O.A. -0.25 to 0.49) for concentrations < 4.0 × 10⁹ cells/L, a range which includes the decision-making cut-offs for continuing clozapine treatment.
The mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% L.O.A. -0.13 to 0.72) for concentrations < 5.0 × 10⁹ cells/L. The mean bias was higher (-11% for ANC, 5% for WBC) at higher concentrations. The correlations between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for the ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
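Bias figures with 95% limits of agreement like those above are typically derived from a Bland-Altman analysis of paired measurements. A minimal sketch, using made-up paired ANC values rather than the study's data:

```python
import numpy as np

def bias_and_loa(reference, test):
    """Mean bias and 95% limits of agreement (Bland-Altman) between paired
    measurements, e.g. DxH900 (reference) vs. CSAN Pronto (test) ANC counts."""
    d = np.asarray(test, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired ANC values (x 10^9 cells/L)
dxh900 = [1.2, 2.0, 2.8, 3.5, 1.6]
pronto = [1.3, 2.0, 2.9, 3.6, 1.7]
bias, lo, hi = bias_and_loa(dxh900, pronto)
```

A clinically acceptable method keeps both limits of agreement within a pre-specified allowable error around zero at the decision thresholds.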

Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells

Procedia PDF Downloads 57
17791 Early Modern Controversies of Mobility within the Spanish Empire: Francisco De Vitoria and the Peaceful Right to Travel

Authors: Beatriz Salamanca

Abstract:

In his public lecture ‘On the American Indians', given at the University of Salamanca in 1538-39, Francisco de Vitoria presented an unsettling defense of freedom of movement, arguing that the Spanish had the right to travel and dwell in the New World, since it was considered part of the law of nations [ius gentium] that men enjoyed free mutual intercourse wherever they went. The principle of freedom of movement brought hopeful expectations, promising to bring mankind together and strengthen the ties of fraternity. However, it led to polemical situations when those whose mobility was in question posed a harmful threat or their presence was for some reason undesired. In this context, Vitoria's argument has been seen on multiple occasions as a justification of the expansion of the Spanish empire. In order to examine the meaning of Vitoria's defense of free mobility, a more detailed look at Vitoria's text is required, together with the study of some of his earliest works, among them his commentaries on Thomas Aquinas's Summa Theologiae, where he presented relevant insights on the idea of the law of nations. In addition, it is necessary to place Vitoria's work in the context of the intellectual tradition he belonged to and the responses he obtained from some of his contemporaries who were concerned with similar issues. The claim of this research is that the Spanish right to travel advocated by Vitoria was not intended to be interpreted in absolute terms, for it had to serve the purpose of bringing peace and unity among men and could not contradict natural law. In addition, Vitoria explicitly observed that the right to travel was only valid if the Spaniards caused no harm, a condition that has been underestimated by his critics. Therefore, Vitoria's legacy is of enormous value, as it initiated a long-lasting discussion regarding the question of the grounds on which human mobility could be restricted.
Again, under Vitoria’s argument it was clear that this freedom was not absolute, but the controversial nature of his defense of Spanish mobility demonstrates how difficult it was and still is to address the issue of the circulation of peoples across frontiers, and shows the significance of this discussion in today’s globalized world, where the rights and wrongs of notions like immigration, international trade or foreign intervention still lack sufficient consensus. This inquiry about Vitoria’s defense of the principle of freedom of movement is being placed here against the background of the history of political thought, political theory, international law, and international relations, following the methodological framework of contextual history of the ‘Cambridge School’.

Keywords: Francisco de Vitoria, freedom of movement, law of nations, ius gentium, Spanish empire

Procedia PDF Downloads 337
17790 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near fault lines can be categorized as near-field earthquakes; in contrast, a far-field earthquake occurs when the region is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic fully, a structure was designed following the current seismic building design specifications (e.g., ASCE 7-10 and ACI 318-14) and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time-history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field ground motions. The pushover analysis was run to aid in properly defining hinge formation in the structure when conducting the nonlinear time-history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to NF ground motions.
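A linear time-history analysis of the kind described above reduces, for a single mode, to integrating an SDOF equation of motion m·u″ + c·u′ + k·u = −m·a_g(t) under a ground-acceleration record. The sketch below uses the Newmark average-acceleration method as an illustration (the study itself used SAP2000, and the ground-motion values here are synthetic, not scaled FEMA P695 records):

```python
import numpy as np

def newmark_sdof(ag, dt, m=1.0, k=4.0 * np.pi ** 2, zeta=0.05):
    """Linear time-history response of an SDOF oscillator to ground
    acceleration ag (array, m/s^2), via Newmark average acceleration
    (gamma = 1/2, beta = 1/4, unconditionally stable)."""
    c = 2.0 * zeta * np.sqrt(k * m)
    beta, gamma = 0.25, 0.5
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m
    a1 = m / (beta * dt ** 2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (1.0 / (2.0 * beta) - 1.0) * m + dt * (gamma / (2.0 * beta) - 1.0) * c
    k_hat = k + a1
    for i in range(n - 1):
        p_hat = -m * ag[i + 1] + a1 * u[i] + a2 * v[i] + a3 * a[i]
        u[i + 1] = p_hat / k_hat
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Demo: constant 1 m/s^2 ground acceleration on a 1 s-period oscillator;
# with damping, the response settles at the static offset -m*ag/k.
dt = 0.01
ag = np.ones(2001)  # synthetic 20 s record
u, v, a = newmark_sdof(ag, dt, zeta=0.20)
```

In practice, one would feed in the scaled near-field and far-field accelerograms and compare peak displacements (story drifts) and absolute accelerations, as done in the study.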

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 111
17789 Performances and Activities of Urban Communities Leader Based on Sufficiency Economy Philosophy in Dusit District, Bangkok Metropolitan

Authors: Phusit Phukamchanoad

Abstract:

The research studies behaviors based on the sufficiency economy philosophy at the individual and community levels, as well as the satisfaction of urban community leaders, collecting data with a purposive sampling technique. In-depth interviews with 26 urban community leaders show that the leaders have a good knowledge and understanding of the sufficiency economy philosophy. Especially in terms of spending money, they must consider the needs of living and be economical, and activities in the community or society should not take advantage of others, including colleagues. At present, most of the urban community leaders live in a sufficient way. They often spend time on public service, but many families are dealing with debt. Many communities have some political conflict and high family expenses because they live in urban communities undergoing rapid social and economic change. However, in many communities the leaders have applied their wisdom to development by gathering and grouping professionals to form activities such as making chili sauce, textile organization, and making artificial flowers for worship. The most prominent group is the foot massage business in Wat Pracha Rabue Tham, a professional group supported continuously by the government. One of the factors used for evaluating satisfaction with community leaders is customary administration in a brotherly, interdependent way: rather than using absolute or controlling power, leaders perform activities with their people intently and determinedly, with a public mind.

Keywords: performance and activities, sufficiency economy, urban communities leader, Dusit district

Procedia PDF Downloads 336
17788 The Investigation of Effectiveness of Different Concentrations of the Mycotoxin Detoxification Agent Added to Broiler Feed, in the Presence of T-2 Toxin, on Performance, Organ Mass and the Residues T-2 Toxin and His Metabolites in the Broiler Tissues

Authors: Jelena Nedeljković Trailović, Marko Vasiljević, Jog Raj, Hunor Farkaš, Branko Petrujkić, Stamen Radulović, Gorana Popvić

Abstract:

The experiment was performed on a total of 99 one-day-old broilers of Cobb 500 provenance, which were divided into nine equal groups (E-I to E-IX). Broilers of the E-I group were fed 0.25 mg T-2 toxin/kg feed; the E-II and E-III groups received 0.25 mg T-2 toxin/kg feed with the addition of 1 kg/t and 3 kg/t, respectively, of the mycotoxin detoxification agent (MDA). The E-IV group received 1 mg of T-2 toxin/kg of feed, and the broilers of the E-V and E-VI groups received 1 mg of T-2 toxin/kg of feed with the addition of 1 kg/t and 3 kg/t of the MDA preparation, respectively. The E-VII group received commercial feed without toxins or additives, and the E-VIII and E-IX groups received feed with 1 kg/t and 3 kg/t of the MDA preparation. The trial lasted 42 days. Observing the results obtained on the 42nd day of the experiment, we can conclude that a change in the absolute mass of the spleen occurred in the broilers of the E-IV group ((1.66±0.14) g), which was statistically significantly lower compared to the broilers of the E-V and E-VI groups ((2.58±0.15) and (2.68±0.23) g). Heart mass was statistically significantly lower in broilers of group E-IV ((9.1±0.38) g) compared to broilers of groups E-V and E-VI ((12.23±0.5) and (11.43±0.51) g). It can be concluded that the broilers that received 1 kg/t and 3 kg/t of the detoxification preparation had absolute organ masses within physiological limits. Broilers of the E-IV group achieved the lowest body mass (BM) during the experiment ((1879±52.73) g on the 42nd day), statistically significantly lower than the BM of broilers of all other experimental groups; this trend is observed from the beginning to the end of the experiment. The protective effect of the detoxification preparation can be seen in broilers of the E-V group, which had a statistically significantly higher BM on the 42nd day of the experiment ((2225±58.81) g) compared to broilers of group E-IV.
Broilers of the E-VIII group ((2452±46.71) g), which received commercial feed with the addition of 1 kg/t of the MDA preparation, had the highest BM at the end of the experiment. At the end of the trial, on the 42nd day, blood samples were collected from broilers of the experimental groups that received T-2 toxin and the MDA detoxification preparation in different concentrations. Liver and breast muscle samples were also collected for testing for the presence and content of T-2 toxin, HT-2 toxin, T-2 tetraol, and T-2 triol. Due to very rapid elimination from the blood, no residues of T-2 toxin or its metabolites were detected in the blood of broilers of groups E-I to E-VI. In the breast muscles, T-2 toxin residues below the LoQ of < 0.2 μg/kg were detected in all groups that received T-2 toxin in the feed; the highest value was recorded in the E-IV group (0.122 μg/kg) and the lowest in the E-VI group (0.096 μg/kg). No T-2 toxin residues were detected in the liver. Residues of HT-2 were detected in the breast muscles and livers of broilers from the E-IV, E-V, and E-VI groups (LoQ < 1 μg/kg): for the breast muscles, 0.054, 0.044, and 0.041 μg/kg, and for the liver, 0.473, 0.231, and 0.185 μg/kg. Summing up all the results, a partial protective effect of the detoxification preparation added to feed in the amount of 1 kg/t can be seen.

Keywords: T-2 toxin, broiler, MDA, mycotoxins

Procedia PDF Downloads 38
17787 One Nature under God, and Divisible: Augustine’s “Duality of Man” Applied to the Creation Stories of Genesis

Authors: Elizabeth Latham

Abstract:

The notion that women were created as innately inferior to men has yet to be expelled completely from the theological system of humankind. This question and the biblical exegesis it requires are of paramount importance to feminist philosophy—after all, the study can bear little fruit if we cannot even agree on equality within the theological roots of humanity. Augustine’s “Duality of Man” gives new context to the two creation stories in Genesis, texts especially relevant given the billions of people worldwide that ascribe to them as philosophical realities. Each creation story describes the origin of human beings and is matched with one of Augustine’s two orders of mankind. The first story describes the absolute origin of the human soul and is paired with Augustine’s notion of the “spiritual order” of a human being: divine and eternal, fulfilling the biblical idea that human beings were created in the image and likeness of God. The second creation story, in contrast, depicts those aspects of humanity that distinguish and separate us from God: doubt, fear, and sin. It also introduces gender as a concept for the first time in the Bible. This story is better matched with Augustine’s idea of the “natural order” of humanity, that by which he believes women, in fact, are inferior. In the synthesis of the two sources, one can see that the natural order and any inferiority that it implies are incidental and not intended in our creation. Gender inequality is introduced with and belongs in the category of human imperfection and to cite the Bible as encouraging it constitutes a gross misunderstanding of scripture. This is easy to see when we divide human nature into “spiritual” and “natural” and look carefully at where scripture falls.

Keywords: augustine, bible, duality of man, feminism, genesis

Procedia PDF Downloads 113