Search results for: hand function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8376

6516 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems

Authors: Alexander Norbach

Abstract:

This paper describes an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation can be applied to any dynamic system that exhibits diffusion and ionisation behaviour. With an additional observer structure, the system runs a parallel real-time simulation based on a diffusion model together with a state-space representation of the remaining dynamics. The proposed model covers electrodynamic effects, including ionising effects and eddy-current distributions, and makes it possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution can be obtained in the same way. The observer also allows uncertainties, unknown initial states and disturbances to be determined, which yields more precise estimates of the system states as well as estimates of the ionising disturbances caused by radiation effects. The results show that such a system can be developed and adapted specifically for space systems, where the radiation effects are calculated in real time. Electronic systems can be damaged by charged-particle fluxes in the space or radiation environment, so it must be determined within a short time whether ionising radiation is present and at what dose. All available sensors should be used to observe the spatial distributions: from the measured values and the known sensor locations, the entire distribution can be reconstructed retroactively and more accurately. Once the type of ionisation and its direct effect on the system are known, protective measures, up to a complete shutdown, can be activated. The results show that faster and higher-quality simulations are possible independently of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of diffusion effects and their mechanisms. For the modelling and the derivation of the equations, an extended current equation is used, in which the quantity K represents the proposed charge-density drift vector. The extended diffusion equation derived from it shows a quantising character and obeys a law similar to the Klein-Gordon equation. Such partial differential equations (PDEs) are analytically solvable once an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary conditions) are given. For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With the known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage can be calculated by a convolution, where g₁ is the direct function and g₂ is a recursive function. The analytical results are sufficient for calculating fields with diffusion effects. Within the scope of this work, a model of the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The main advantage of the proposed diffusion calculation is its real-time capability, which is not achievable with the FEM programs available today. It therefore makes sense to pursue and investigate these methods thoroughly in further research.
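As an illustration of the convolution step described above, the following Python sketch computes a field-related output from a sampled current waveform by combining a direct term based on g₁(k·Ts) with a recursive term based on g₂(k·Ts). The kernels, the sampling period and the input waveform are placeholders, not the responses derived in the paper.

```python
import numpy as np

def real_time_response(i_samples, g1, g2, Ts):
    """Sketch of a direct-plus-recursive update: y[k] = Ts * sum_m g1[m]*i[k-m] + sum_m g2[m]*y[k-m]."""
    y = np.zeros(len(i_samples))
    for k in range(len(i_samples)):
        direct = sum(g1[m] * i_samples[k - m] for m in range(min(k + 1, len(g1))))
        recursive = sum(g2[m] * y[k - m] for m in range(1, min(k + 1, len(g2))))
        y[k] = Ts * direct + recursive
    return y

# hypothetical kernels and a step-like current waveform (placeholders)
g1 = np.exp(-np.arange(32) * 0.2)          # placeholder direct response g1(k*Ts)
g2 = np.array([0.0, 0.5, 0.25])            # placeholder recursive response g2(k*Ts)
current = np.r_[np.zeros(8), np.ones(56)]  # arbitrary current waveform
field = real_time_response(current, g1, g2, Ts=1e-6)
```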

Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver

Procedia PDF Downloads 127
6515 Realization and Characterizations of Conducting Ceramics Based on ZnO Doped by TiO₂, Al₂O₃ and MgO

Authors: Qianying Sun, Abdelhadi Kassiba, Guorong Li

Abstract:

ZnO with the wurtzite structure is a well-known semiconducting oxide (SCO), applied in thermoelectric devices, varistors, gas sensors, transparent electrodes, solar cells, liquid crystal displays, and piezoelectric and electro-optical devices. Intrinsically, ZnO is a weakly n-type SCO due to native defects (Znᵢ, Vₒ). However, substitutional doping by metallic elements such as Al and Ti gives rise to a high n-type conductivity ensured by donor centers. Under a CO+N₂ sintering atmosphere, the Schottky barriers of ZnO ceramics are suppressed by lowering the concentration of acceptors at grain boundaries, which induces a large increase in the Hall mobility and thereby increases the conductivity. The presented work concerns ZnO-based ceramics fabricated with doping by TiO₂ (0.50 mol%), Al₂O₃ (0.25 mol%) and MgO (1.00 mol%) and sintered in different atmospheres (air (A), N₂ (N), CO+N₂ (C)). We obtained uniform, dense ceramics with ZnO as the main phase and Zn₂TiO₄ spinel as a minor secondary phase. An important increase of the conductivity was shown for samples A, N, and C, which were sintered under the different atmospheres. The highest conductivity (σ = 1.52×10⁵ S·m⁻¹) was obtained under the reducing atmosphere (CO). The role of doping was investigated with the aim of identifying the local environment and valence states of the doping elements. Electron paramagnetic resonance (EPR) spectroscopy was used to determine the concentration of defects and the effects of charge carriers in the ZnO ceramics as a function of the sintering atmosphere. The relation between conductivity and defect concentration shows opposite behaviour between these parameters, suggesting that defects act as traps for charge carriers. For the Al ions, the nuclear magnetic resonance (NMR) technique was used to identify their local coordination. Beyond the six- and four-fold coordinated Al, an additional NMR signature of the ZnO-based TCO requires an analysis that takes into account the grain boundaries and the conductivity through Knight shift effects. From the thermal evolution of the conductivity as a function of the sintering atmosphere, we succeeded in defining the conditions for realizing ZnO-based TCO ceramics with an important temperature coefficient of resistance (TCR), which is promising for the electrical safety of devices.

Keywords: ceramics, conductivity, defects, TCO, ZnO

Procedia PDF Downloads 187
6514 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels; many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed, but they are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To avoid the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained by the Monte Carlo simulation (MCS) technique are included. To overcome the problem of mapping the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
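A minimal sketch of the kind of point-estimate evaluation discussed above, assuming Rosenblueth's classical 2ⁿ scheme with equal weights and a normal fit for the limit state function; the black-box function g stands in for a numerical (e.g., FEM) model, and the statistics are hypothetical rather than those of the bridges studied.

```python
import itertools
import numpy as np
from scipy.stats import norm

def point_estimate_reliability(g, means, stds):
    """Rosenblueth 2^n point-estimate method: evaluate g at mean +/- one std of every variable,
    take equally weighted moments, then fit a normal PDF to obtain a FORM-like reliability index."""
    n = len(means)
    vals = [g([m + s * sd for m, s, sd in zip(means, signs, stds)])
            for signs in itertools.product((-1.0, 1.0), repeat=n)]
    mu, sigma = np.mean(vals), np.std(vals)
    beta = mu / sigma                       # reliability index under the normal assumption
    return beta, norm.cdf(-beta)            # index and corresponding failure probability

# toy limit state g = R - S with hypothetical resistance/load moments
beta, pf = point_estimate_reliability(lambda x: x[0] - x[1], [3500.0, 2200.0], [350.0, 440.0])
```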

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 344
6513 Determination of Cadmium, Lead, Nickel, and Zinc in Some Green Tea Samples Collected from Libyan Markets

Authors: Jamal A. Mayouf, Hashim Salih Al Bayati

Abstract:

Green tea is one of the most common drinks in all Libyan cities. Heavy metal contents, namely cadmium (Cd), lead (Pb), nickel (Ni) and zinc (Zn), were determined in four green tea samples collected from Libyan markets and in their tea infusions by using atomic emission spectrophotometry after acid digestion. The results indicate that the concentrations of Cd, Pb, Ni and Zn in the tea infusion samples ranged from 0.07-0.12, 0.19-0.28, 0.09-0.15 and 0.18-0.43 mg/l after boiling for 5 min; 0.06-0.08, 0.18-0.23, 0.08-0.14 and 0.17-0.27 mg/l after boiling for 10 min; and 0.07-0.11, 0.18-0.24, 0.08-0.14 and 0.21-0.34 mg/l after boiling for 15 min, respectively. The concentrations of the same elements in the tea leaves ranged from 6.0-18.0, 36.0-42.0, 16.0-20.0 and 44.0-132.0 mg/kg, respectively. The concentrations of Cd, Pb, Ni and Zn in the tea leaf samples were higher than the Prevention of Food Adulteration (PFA) limit and the World Health Organization (WHO) permissible limit.

Keywords: tea, infusion, metals, Libya

Procedia PDF Downloads 405
6512 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles

Authors: Paulo Sérgio Ribeiro de Araújo Bogas

Abstract:

Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches over the course of higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary experience, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, students' responses can be described as those of students who reinforce the initial deep approach, students who maintain the initial deep approach level, and others who change from an emphasis on the deep approach to one closer to the superficial. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing influences the superficial level and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in the deep/superficial approach scores and in the experiential level. In the case of the real challenges, the categories of "attribution of meaning to what is studied" and the possibility of "contact with an aspirational context" for their future profession stand out. In this category, the dimensions of autonomy that will be required of the students were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself, and, on the other hand, the additional effort that this practice required from some of the students.

Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing

Procedia PDF Downloads 79
6511 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant

Authors: Elenice Maria Schons Silva, Andre Carlos Silva

Abstract:

The growing demand for raw materials has increased mining activities. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grades, together with constant pressure to reduce production costs and environmental impacts. Froth flotation deserves special attention among the concentration methods for mineral processing. Besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the surface physicochemical properties of the minerals, and separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different vegetable species (pequi pulp, macauba nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, an alkaline hydrolysis (saponification) was necessary before their contact with the minerals. The saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids industrially adopted as an apatite collector and manufactured by Clariant, was used as the benchmark. In order to find a feasible replacement for cornstarch, the flour and starch of a graniferous sorghum variety were tested as depressants. Apatite samples were used in the flotation tests. XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (scanning electron microscopy with energy dispersive spectroscopy) were used to characterize the apatite samples. Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as the depressant benchmark. Four depressant dosages and pH values were tested. A statistical test was used to verify the influence of pH, dosage, and starch type on the mineral recoveries. For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On the one hand, macauba pulp oil showed excellent results for all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second best oil tested, and more than 90% of the apatite particles were recovered at a dosage of 7.5 mg/L. Regarding the depressants, the lowest apatite recovery with sorghum starch was found at a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery for the same conditions was 1.40% for sorghum flour (approximately 30% lower). When compared with cornstarch at the same conditions, sorghum flour produced an apatite recovery 91% lower.

Keywords: collectors, depressants, flotation, mineral processing

Procedia PDF Downloads 143
6510 Temporal Progression of Episodic Memory as a Function of Encoding Condition and Age: Further Investigation of Action Memory in School-Aged Children

Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf

Abstract:

Studies of adults' episodic memory have found that enacted encoding not only improves recall performance but also speeds up retrieval during the recall period. The current study focused on exploring the temporal progression of different encoding conditions in younger and older school children. A total of 204 students from two age groups (8 and 14 years old) participated in this study. During the study phase, action encoding was examined in two forms: participants either performed the phrases themselves (subject-performed tasks, SPT) or observed the experimenter performing them (experimenter-performed tasks, EPT); both were compared with verbal encoding, in which participants listened to verbal action phrases (VT). At the test phase, we used immediate and delayed free recall tests. We observed significant differences in memory performance as a function of age group and encoding condition in both the immediate and delayed free recall tests. Moreover, the temporal progression of recall was faster in older children compared with younger ones. The interaction of age group and encoding condition was only significant in delayed recall, showing that younger children performed better with EPT whereas older children performed better with SPT. It is proposed that the enactment effect in the form of SPT enhances item-specific processing, whereas EPT improves relational information processing, and that these differential processes are responsible for the results obtained in younger and older children. The role of memory strategies and information processing methods in younger and older children was considered in this study. Moreover, the temporal progression of recall was faster for action encoding in the form of SPT and EPT than for verbal encoding in both immediate and delayed free recall, and the size of the enactment effect increased steadily throughout the recall period. The results of the present study provide further evidence that action memory is explained with an emphasis on the notion of information processing and strategic views. These results also reveal the temporal progression of recall as a new dimension of episodic memory in children.

Keywords: action memory, enactment effect, episodic memory, school-aged children, temporal progression

Procedia PDF Downloads 269
6509 Comparative Comparison (Cost-Benefit Analysis) of the Costs Caused by the Earthquake and Costs of Retrofitting Buildings in Iran

Authors: Iman Shabanzadeh

Abstract:

Earthquakes are among the most frequent natural hazards in Iran. Therefore, policy making to improve the strengthening of structures is a requirement of any approach to prevent and reduce the risk of the destructive effects of earthquakes. In order to choose the optimal policy in the face of earthquakes, this article examines the cost of the financial damage caused by earthquakes in the building sector and compares it with the costs of retrofitting. In this study, the results of adopting the scenario of "action after the earthquake" and the policy scenario of "strengthening structures before the earthquake" have been collected, calculated and analyzed together. Methodologically, data received from governorates and building retrofitting engineering companies have been used. The scope of the study is earthquakes that occurred in the geographical area of Iran, and among them, eight earthquakes have been studied specifically: Miane, Ahar and Haris, Qator, Momor, Khorasan, Damghan and Shahroud, Gohran, Hormozgan and Ezgole. The main basis of the calculations is the data obtained from retrofitting companies regarding the cost per square meter of building retrofitting and the governorate data regarding the destructive power of the earthquakes and the realized costs for the reconstruction and construction of residential units. The estimated costs have been converted to 2021 values using the time-value-of-money method to enable comparison and aggregation. The cost-benefit comparison of the two policies, action after the earthquake and retrofitting before the earthquake, in the eight earthquakes investigated shows that the country has suffered five thousand billion Tomans of losses due to the lack of retrofitting of buildings against earthquakes. Based on the data of Iran's Budget Law, this figure was approximately twice the 2021 budget of the Ministry of Roads and Urban Development and five times the 2021 budget of the Islamic Revolution Housing Foundation. The results show that the policy of retrofitting structures before an earthquake is significantly more optimal than the competing scenario. The comparison of the two policy scenarios examined in this study shows that the policy of retrofitting buildings before an earthquake, on the one hand, prevents huge losses and, on the other hand, by increasing the number of earthquake-resistant houses, reduces the amount of earthquake destruction. This is in addition to other positive effects of retrofitting, such as the reduction of mortality due to the earthquake resistance of buildings and the reduction of other economic and social effects caused by earthquakes, all of which support the cost-effectiveness of the policy scenario of "strengthening structures before earthquakes" in Iran.
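The conversion of historical costs to 2021 values mentioned above follows the standard time-value-of-money compounding rule; the sketch below illustrates it with a hypothetical cost, year and annual rate (the actual rates used in the study are not reproduced here).

```python
def value_in_2021(cost, year_incurred, annual_rate):
    """Compound a past cost forward to 2021: V = C * (1 + r)^(2021 - year).
    The rate is a placeholder for whatever inflation or discount rate is adopted."""
    return cost * (1.0 + annual_rate) ** (2021 - year_incurred)

# hypothetical example: a cost of 100 billion Tomans incurred in 2012, 20% annual rate
print(value_in_2021(100e9, 2012, 0.20))
```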

Keywords: disaster economy, earthquake economy, cost-benefit analysis, resilience

Procedia PDF Downloads 53
6508 Effects of Pulsed Electromagnetic and Static Magnetic Fields on Musculoskeletal Low Back Pain: A Systematic Review Approach

Authors: Mohammad Javaherian, Siamak Bashardoust Tajali, Monavvar Hadizadeh

Abstract:

Objective: This systematic review was conducted to evaluate the effects of Pulsed Electromagnetic Fields (PEMF) and Static Magnetic Fields (SMF) on pain relief and functional improvement in patients with musculoskeletal Low Back Pain (LBP). Methods: Seven electronic databases were searched independently by two researchers to identify published Randomized Controlled Trials (RCTs) on the efficacy of pulsed electromagnetic, static magnetic, and therapeutic nuclear magnetic fields. The databases searched systematically were Ovid Medline®, Ovid Cochrane RCTs and Reviews, PubMed, Web of Science, Cochrane Library, CINAHL, and EMBASE, from 1968 to February 2016. The relevant keywords were selected using MeSH. After the initial search and identification of relevant manuscripts, all references in the selected studies were searched to identify additional potentially relevant manuscripts. Published RCTs in English were included if they reported changes in pain and/or functional disability following the application of magnetic fields to chronic musculoskeletal low back pain. All studies with surgical patients, patients with pelvic pain, or a combination with other treatment techniques such as acupuncture or diathermy were excluded. The identified studies were critically appraised and the data were extracted independently by two raters (M.J and S.B.T). Disagreements were resolved through discussion between the raters. Results: In total, 1505 abstracts were found in the initial electronic search and were reviewed to identify potentially relevant manuscripts. Seventeen possibly appropriate studies were retrieved in full text, of which 48 were excluded after reviewing their full texts. The ten selected articles were categorized into three subgroups: PEMF (6 articles), SMF (3 articles), and therapeutic nuclear magnetic fields (tNMF) (1 article). Since only one study evaluated tNMF, it was excluded. In the PEMF group, one study of acute LBP did not show significant positive results; the majority of the other five studies, on Chronic Low Back Pain (CLBP), indicated its efficacy for pain relief and functional improvement, but the study with the fewest sessions (6 sessions over 2 weeks) did not report a significant difference between treatment and control groups. In the SMF subgroup, two articles reported near-significant pain reduction without any functional improvement, although more studies are needed. Conclusion: PEMFs with a strength of 5 to 150 G or 0.1 to 0.3 G and a frequency of 5 to 64 Hz, or a sweep from 7 Hz to 7 kHz, can be considered an effective modality for pain relief and functional improvement in patients with chronic low back pain, but there is not enough evidence to confirm their effectiveness in acute low back pain. To achieve appropriate effectiveness, it is suggested to apply this treatment modality for 20 minutes per day for at least 9 sessions. SMFs have not been reported to be substantially effective in decreasing pain or improving function in chronic low back pain. More studies are necessary to achieve more reliable results.

Keywords: pulsed electromagnetic field, static magnetic field, magnetotherapy, low back pain

Procedia PDF Downloads 201
6507 A Quantitative Study on the “Unbalanced Phenomenon” of Mixed-Use Development in the Central Area of Nanjing Inner City Based on the Meta-Dimensional Model

Authors: Yang Chen, Lili Fu

Abstract:

Promoting urban regeneration in existing built-up areas has been elevated to a national strategy in China. In this context, because of the multidimensional sustainability benefits of intensive land use, mixed-use development has become an important objective of high-quality urban regeneration in the inner city. However, over the long period since China's reform and opening up, the "unbalanced phenomenon" of mixed-use development in China's inner cities has been very serious. On the one hand, excessive focus on certain individual spaces has pushed the level of mixed-use development in some areas substantially ahead of others, resulting in a growing gap between different parts of the inner city. On the other hand, excessive focus on a single dimension of the spatial organization of mixed-use development, such as the enhancement of functional mix or spatial capacity, has led to lagging or neglected development of other dimensions, such as pedestrian permeability, green environmental quality, and social inclusion. This phenomenon is particularly evident in the central area of the inner city, and it clearly runs counter to the needs of sustainable development in China's new era. A rational qualitative and quantitative analysis of the "unbalanced phenomenon" therefore helps to identify the problem and provides a basis for formulating optimization plans in the future. This paper builds a dynamic evaluation method for mixed-use development based on a meta-dimensional model and then uses spatial evolution analysis and spatial consistency analysis with ArcGIS software to reveal the "unbalanced phenomenon" over the past 40 years in the central area of Nanjing, a typical Chinese city facing regeneration. The results show that, compared with the increase in functional mix and capacity, the dimensions of residential space mix, public service facility mix, pedestrian permeability, and greenness in Nanjing's central area showed varying degrees of lagging improvement, and the unbalanced development problems differ between parts of the city center, so future governance and planning for mixed-use development need to address these problems fully. The research methodology of this paper provides a tool for the comprehensive, dynamic identification of changes in the level of mixed-use development, and the results deepen the knowledge of the evolution of mixed-use development patterns in China's inner cities and provide a reference for future regeneration practices.
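One common way to quantify the functional-mix dimension in this kind of multidimensional evaluation is an entropy-type mix index; the sketch below is only an illustrative assumption (the paper's own meta-dimensional scoring is not reproduced), with hypothetical land-use shares for a single spatial unit.

```python
import numpy as np

def land_use_mix_entropy(shares):
    """Entropy mix index in [0, 1]: H = -sum(p_i * ln p_i) / ln(n) for n land-use shares p_i."""
    p = np.asarray(shares, dtype=float)
    p = p[p > 0] / p.sum()                                  # drop empty categories, normalise
    return float(-(p * np.log(p)).sum() / np.log(len(shares)))

# hypothetical shares of residential, commercial, office, public-service floor area in one block
print(land_use_mix_entropy([0.45, 0.25, 0.20, 0.10]))
```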

Keywords: mixed-use development, unbalanced phenomenon, the meta-dimensional model, over the past 40 years of Nanjing, China

Procedia PDF Downloads 98
6506 Hate Speech Detection Using Deep Learning and Machine Learning Models

Authors: Nabil Shawkat, Jamil Saquer

Abstract:

Social media has accelerated our ability to engage with others and eliminated many communication barriers. On the other hand, the widespread use of social media resulted in an increase in online hate speech. This has drastic impacts on vulnerable individuals and societies. Therefore, it is critical to detect hate speech to prevent innocent users and vulnerable communities from becoming victims of hate speech. We investigate the performance of different deep learning and machine learning algorithms on three different datasets. Our results show that the BERT model gives the best performance among all the models by achieving an F1-score of 90.6% on one of the datasets and F1-scores of 89.7% and 88.2% on the other two datasets.
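For the classical machine-learning side of such a comparison (not the authors' BERT setup), a minimal TF-IDF plus logistic-regression baseline with F1 evaluation might look like the sketch below; the data file and column names are placeholders.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# hypothetical dataset with 'text' and 'label' columns (hate vs. non-hate)
df = pd.read_csv("hate_speech_dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42)

# word/bigram TF-IDF features followed by a balanced logistic-regression classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000, class_weight="balanced"))
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```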

Keywords: hate speech, machine learning, deep learning, abusive words, social media, text classification

Procedia PDF Downloads 129
6505 Development of FEM Code for 2-D Elasticity Problems Using Quadrilateral and Triangular Elements

Authors: Muhammad Umar Kiani, Waseem Sakawat

Abstract:

This study presents the development of FEM code using quadrilateral 4-node (Q4) and triangular 3-node (T3) elements. The code is written in MATLAB. Instead of implementing both elements in the same code, two separate codes were written. The quadrilateral element is difficult to handle directly, which is why natural coordinates (ξ, η) are used; consequently, the Q4 code includes numerical integration (Gauss quadrature), and full integration is performed using a 2-point Gauss rule. The T3 element, on the other hand, can be formulated directly using the direct stiffness approach. An axially loaded member, a cantilever (with special constraints) and patch test cases were analysed with both codes, and the results were verified against ANSYS.
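For the T3 (constant-strain triangle) path described above, the element stiffness matrix can be assembled directly from the nodal coordinates. The Python sketch below mirrors that direct-stiffness formulation for plane stress; the material values and coordinates are placeholders, and the authors' MATLAB implementation is not reproduced.

```python
import numpy as np

def cst_stiffness(xy, E, nu, t):
    """Plane-stress stiffness of a 3-node triangle: K = t * A * B^T D B."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))   # element area
    b = np.array([y2 - y3, y3 - y1, y1 - y2])                    # strain-displacement terms
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    B = np.zeros((3, 6))
    B[0, 0::2], B[1, 1::2] = b, c
    B[2, 0::2], B[2, 1::2] = c, b
    B /= 2.0 * A
    D = (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],          # plane-stress constitutive matrix
                                        [nu, 1.0, 0.0],
                                        [0.0, 0.0, (1.0 - nu) / 2.0]])
    return t * A * B.T @ D @ B

# placeholder nodal coordinates, Young's modulus, Poisson's ratio and thickness
K = cst_stiffness([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], E=200e9, nu=0.3, t=0.01)
```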

Keywords: FEM code, MATLAB, numerical integration, ANSYS

Procedia PDF Downloads 413
6504 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban tolls in Tunisia. Price-based regulation, i.e. the urban toll, is the outcome of a political process shaped by three-fold objectives: effectiveness, equity and social acceptability. This produces economic interest groups and functions with incongruent preferences. The plausibility of this speculation goes hand in hand with the fact that these economic interest groups are also taxpayers who undeniably perceive the urban toll as an additional charge. This wariness is coupled with questions about the conditions of use and the redistribution of the collected tax revenue, and the idea of the Leviathan state completes the picture. In a nutshell, although research related to road congestion proliferates, no de facto legitimacy can be claimed. Nonetheless, the theory of urban tolls leads economists to question ways of reducing the negative external effects linked to congestion, and only then does the urban toll appear to offer an answer to these issues. Undeniably, the urban toll involves inherent conflicts due to the apparent no-payment principle of a public asset as well as the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the main factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic role, one has to recognize the factors that intervene in the acceptability of a congestion toll, a topic that has generated a copious number of articles and reports, most of which lack solid theoretical content. It is noticeable that uncertainties still surround the exact nature of the acceptability process: accepting a congestion tariff may differ from one era to another, from one region to another, from one population to another, and so on. Notably, this article, within a convenient time frame, attempts to bring into focus the link between the social acceptability of the urban congestion toll and the value of time, through a survey method rarely employed in Tunisia, the stated preference method. How can the urban toll, as a tax, be defined, justified and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can the redistribution of the urban toll revenue be made visible and economically equitable? How can the redistribution of the revenue of the urban toll compensate the disadvantaged when introducing such a tariff measure? This paper offers answers to these research questions and follows the line of contribution of Jules Dupuit (1844).

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 281
6503 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS) and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customer satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance contribution of each random innovation to the variables in the VAR, to examine the effect of a one-standard-deviation shock to one of the innovations on current and future values through the impulse responses, and to determine the causal relationships between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector, while the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was done over the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response functions derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT, and POS transactions was positive and significant in the periods of analysis. The paper also confirmed, with the entropy and range-of-values methods, that in the long run both cheques and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage and patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
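The entropy step of the MCDM analysis referred to above can be sketched as follows, using a hypothetical decision matrix rather than the study's actual innovation and performance data: each column is a criterion, and criteria whose values are more dispersed across alternatives receive larger weights.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Entropy weighting for MCDM: normalise each criterion column, compute its entropy,
    and weight criteria by their degree of divergence (1 - entropy)."""
    X = np.asarray(decision_matrix, dtype=float)
    P = X / X.sum(axis=0)                                  # column-wise normalisation
    m = X.shape[0]
    logP = np.log(np.where(P > 0, P, 1.0))                 # log(1) = 0 handles zero entries safely
    E = -(P * logP).sum(axis=0) / np.log(m)                # entropy of each criterion
    d = 1.0 - E                                            # degree of divergence
    return d / d.sum()

# hypothetical alternatives (rows) scored on ROA and ROE (columns)
print(entropy_weights([[2.1, 11.5], [1.8, 9.2], [2.6, 13.0], [1.2, 7.4]]))
```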

Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 114
6502 Validation of the Female Sexual Function Index and the Female Sexual Distress Scale-Desire/Arousal/Orgasm in Chinese Women

Authors: Lan Luo, Jingjing Huang, Huafang Li

Abstract:

Introduction: Distressing low sexual desire is common in China, while the lack of reliable and valid instruments to evaluate the symptoms of hypoactive sexual desire disorder (HSDD) impedes related research and clinical services. Aim: This study aimed to validate the reliability and validity of the Female Sexual Function Index (FSFI) and the Female Sexual Distress Scale-Desire/Arousal/Orgasm (FSDS-DAO) in Chinese female HSDD patients. Methods: We administered the FSFI and FSDS-DAO in a convenience sample of Chinese adult women. Participants were diagnosed by a psychiatrist according to the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR). Results: We had a valid analysis sample of 279 Chinese women, of whom 107 were HSDD patients. The Cronbach's α values of the FSFI and FSDS-DAO were 0.947 and 0.956, respectively, and their intraclass correlation coefficients were 0.86 and 0.89, respectively (test-retest interval of 13-15 days). The correlation coefficients between the Revised Adult Attachment Scale (RAAS) and the FSFI (or FSDS-DAO) did not exceed 0.4. The area under the receiver operating characteristic (ROC) curve was 0.83 when FSFI-d (the desire domain of the FSFI) and the FSDS-DAO were combined to diagnose HSDD, which was significantly different from that obtained when using the scales individually. An FSFI-d score of less than 2.7 (range 1.2-6) with an FSDS-DAO score of no less than 15 (range 0-60) (sensitivity 65%, specificity 83%), or an FSFI-d score of no more than 3.0 with an FSDS-DAO score of no less than 14 (sensitivity 74%, specificity 77%), can be used as cutoff scores in clinical research or outpatient screening. Clinical implications: The FSFI (including FSFI-d) and FSDS-DAO are suitable for the screening and evaluation of Chinese female HSDD patients of childbearing age. Strengths and limitations: Strengths include a thorough validation of the FSFI and FSDS-DAO and the exploration of a cutoff score combining FSFI-d and FSDS-DAO. Limitations include a small convenience sample and the requirement that HSDD patients be sexually active. Conclusion: The FSFI (including FSFI-d) and FSDS-DAO have good internal consistency, test-retest reliability, construct validity, and criterion validity in Chinese female HSDD patients of childbearing age.
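For reference, the internal-consistency statistic reported above (Cronbach's α) can be computed from an item-score matrix as in the short sketch below; the responses are hypothetical and this is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    X = np.asarray(items, dtype=float)          # rows = respondents, columns = items
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# hypothetical responses of 5 participants to 4 scale items
print(cronbach_alpha([[4, 5, 4, 4], [2, 3, 2, 3], [5, 5, 4, 5], [3, 3, 3, 2], [4, 4, 5, 4]]))
```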

Keywords: sexual desire, sexual distress, hypoactive sexual desire disorder, scale

Procedia PDF Downloads 73
6501 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and floods occur depending on the intensity of the storms. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supply and, last but not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, however, suffers severely and frequently from floods due to heavy monsoon rainfall and releases from the upstream reservoirs. Therefore, an effective flood management study is required to understand in depth the nature and extent of the flood, waterlogging and erosion-related problems, the affected area, and the damages in the Lower Damodar region by conducting mathematical model studies. The design flood, or design discharge, is needed as input to the respective model in order to generate several scenarios from the simulation runs; the ultimate aim is to arrive at a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage of areas inundated by drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This approach, however, is limited by the availability of a long record of peak flood data for correctly determining the appropriate type of probability density function. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated with five candidate distributions, i.e., generalized extreme value, extreme value type I, Pearson type III, log-Pearson and normal. Annual peak discharge series are available at Durgapur barrage for the period 1979 to 2013 (35 years), and the available series were subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15 and 25 years at Durgapur barrage are estimated by the flood frequency method. It is then necessary to develop flood hydrographs for these floods to facilitate the mathematical model studies that determine the depth and extent of inundation. The null hypothesis that the distributions fit the data is checked at 95% confidence with a goodness-of-fit test, i.e., the chi-square test. The test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation in the estimated frequency floods, so it is considered prudent to average the results of these five distributions for the required frequencies. The inundated area obtained with this flood matches past data well.
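A minimal sketch of the frequency-analysis step, assuming the annual peak series is available as a plain text file (the file name is a placeholder and only two of the five candidate distributions are shown): fit the distribution to the annual maxima and read the T-year flood off the fitted quantile function at non-exceedance probability 1 - 1/T.

```python
import numpy as np
from scipy import stats

peaks = np.loadtxt("durgapur_annual_peaks.txt")        # hypothetical file, one annual peak per line

gev_params = stats.genextreme.fit(peaks)               # generalized extreme value
lp3_params = stats.pearson3.fit(np.log(peaks))         # log-Pearson III (Pearson III on log flows)

for T in (10, 15, 25):
    p = 1.0 - 1.0 / T                                  # non-exceedance probability of the T-year flood
    q_gev = stats.genextreme.ppf(p, *gev_params)
    q_lp3 = np.exp(stats.pearson3.ppf(p, *lp3_params))
    print(f"T = {T:>2} yr:  GEV = {q_gev:.0f}, LP3 = {q_lp3:.0f}")
```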

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 196
6500 Internal Audit Function Contributions to the External Audit

Authors: Douglas F. Prawitt, Nathan Y. Sharp, David A. Wood

Abstract:

Consistent with prior experimental and survey studies, we find that IAFs that spend more time directly assisting the external auditor are associated with lower external audit fees. Interestingly, we do not find evidence that external auditors reduce fees based on work previously performed by the IAF. We also find that the time spent assisting the external auditor has a greater negative effect on external audit fees than the time spent performing tasks upon which the auditor may rely but that are not performed as direct assistance to the external audit. Our results also show that the proxies previously used to measure this relation are either not associated with, or are negatively associated with, our direct measures of how the IAF can contribute to the external audit, and are highly positively associated with the size and complexity of the organization. Thus, we conclude that the disparate experimental and archival results may be attributable to issues surrounding the construct validity of the measures used in previous archival studies, and that when measures similar to those used in experimental studies are employed in archival tests, the archival results are consistent with the experimental findings. Our research makes four primary contributions to the literature. First, we provide evidence that internal auditing contributes to a reduction in external audit fees. Second, we replicate and provide an explanation for why previous archival studies find that internal auditing has either no association with external audit fees or is associated with an increase in those fees: prior studies generally use proxies of internal audit contribution that do not adequately capture the intended construct. Third, our research expands on survey-based research (e.g., Oil Libya sh.co.) by separately examining the impact on the audit fee of the internal auditors' work in directly assisting external auditors and of the internal auditors' prior work upon which external auditors can rely. Finally, we extend prior research by using a new, independent data source to validate and extend prior studies. This data set also allows us to examine the impact of internal auditing on the external audit fee and to use a more comprehensive external audit fee model that better controls for the determinants of the external audit fee.

Keywords: internal audit, contribution, external audit, function

Procedia PDF Downloads 116
6499 Surface Motion of Anisotropic Half Space Containing an Anisotropic Inclusion under SH Wave

Authors: Yuanda Ma, Zhiyong Zhang, Zailin Yang, Guanxixi Jiang

Abstract:

Anisotropy is very common in underground media such as rock, sand, and soil; hence, the dynamic response of an anisotropic medium to elastic waves differs significantly from that of an isotropic one. Moreover, underground heterogeneities and structures, such as pipelines, cylinders, or tunnels, are usually made of composite materials, which makes these heterogeneities and structures themselves anisotropic. Both the anisotropy of the underground medium and the heterogeneities affect the surface motion of the ground. Aiming to provide theoretical references for earthquake engineering and seismology, the surface motion of an anisotropic half-space with an embedded cylindrical anisotropic inclusion under SH-wave incidence is investigated in this work. Considering the anisotropy of the underground medium, the governing equation of SH-wave propagation with three elastic parameters is introduced. Then, based on the complex function method and a multipolar coordinate system, the governing equation is written in the complex plane, and with the help of a pair of transformations it is brought into a standard form. By the same method, the governing equation of SH-wave propagation in the cylindrical inclusion, with another three elastic parameters, is normalized as well. Subsequently, the scattered wave in the half-space and the standing wave in the inclusion are derived. Different incident wave angles and degrees of anisotropy are considered to obtain the reflected wave. The unknown coefficients in the scattered and standing waves are then solved by using the continuity conditions at the boundary of the inclusion. By truncating the scattered and standing waves to a finite number of terms, the boundary-condition equations can be solved numerically. After verifying the convergence and precision of the calculation, its validity is also verified by reducing the model to a degenerate case. Several parameters that influence the surface displacement of the half-space are considered: the dimensionless wave number, the dimensionless depth of the inclusion, the anisotropy parameters, the wave number ratio, and the shear modulus ratio. Finally, the surface displacement amplitude of the half-space is calculated and discussed for different parameters.
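For reference, the anti-plane (SH) governing equation with three elastic parameters takes the generic form below; the index convention depends on the chosen coordinate axes, so this is a standard textbook form rather than the paper's exact notation.

```latex
c_{55}\,\frac{\partial^{2} w}{\partial x^{2}}
+ 2\,c_{45}\,\frac{\partial^{2} w}{\partial x\,\partial y}
+ c_{44}\,\frac{\partial^{2} w}{\partial y^{2}}
= \rho\,\frac{\partial^{2} w}{\partial t^{2}}
```

Here w is the out-of-plane displacement, c44, c45 and c55 are the three anti-plane elastic moduli, and ρ is the mass density; setting c45 = 0 and c44 = c55 recovers the isotropic wave equation.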

Keywords: anisotropy, complex function method, sh wave, surface displacement amplitude

Procedia PDF Downloads 117
6498 Potential Energy Expectation Value for Lithium Excited State (1s2s3s)

Authors: Khalil H. Al-Bayati, G. Nasma, Hussein Ban H. Adel

Abstract:

The purpose of the present work is to calculate the expectation value of the potential energy for different spin states (ααα ≡ βββ, αβα ≡ βαβ) and compare it with the spin states (αββ, ααβ) for the lithium excited state (1s2s3s) and Li-like ions (Be⁺, B²⁺) using the Hartree-Fock wave function and a partitioning technique. The interparticle expectation values show a linear behaviour with atomic number, and for each atom and ion they follow the trend ααα < ααβ < αββ < αβα.
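In atomic units, the partitioned potential energy expectation value evaluated above is the sum of the electron-nucleus and electron-electron contributions obtained from the Hartree-Fock wave function; the standard form is shown below (generic notation, not necessarily the paper's).

```latex
\langle V \rangle
= \langle V_{en} \rangle + \langle V_{ee} \rangle
= -Z \sum_{i=1}^{3} \Big\langle \frac{1}{r_{i}} \Big\rangle
+ \sum_{i<j} \Big\langle \frac{1}{r_{ij}} \Big\rangle
```

Here Z is the nuclear charge, r_i the electron-nucleus distances and r_ij the interelectronic distances of the three-electron system.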

Keywords: lithium excited state, potential energy, 1s2s3s, mathematical physics

Procedia PDF Downloads 481
6497 Schooling Competent Citizens: A Normative Analysis of Citizenship Education Policy in Europe

Authors: M. Joris, O. Agirdag

Abstract:

For over two decades, calls for citizenship education (CE) have been rising to the top of educational policy agendas in Europe. The main motive for the current treatment of CE as a key topic is a sense of crisis: social and political threats that go beyond the reach of nations and require action at the international and European level. On the one hand, this context has triggered abundant attention to the promotion of citizenship through education. On the other hand, the ubiquity of citizenship and education in policy language is paired with a self-evident manner of using the concepts: the more we call for citizenship in and through education, the less the concepts seem to be made explicit or be defined. Research and reflection on the normativity of the concepts of citizenship and CE in Europe are scarce. Departing from the idea that policies are always normative, this study, therefore, investigates the normativity of the current concepts of citizenship and education, in ’key’ European CE policy texts. The study consists of a content analysis of these texts, based on a normative framework developed around the different dimensions of citizenship as status, identity, virtues and agency. The framework also describes the purposes of education and its learning processes, content and practices, based on the assumption that good education always includes, next to qualification and socialisation, a purpose of emancipation: of helping young people become autonomous and independent subjects. The analysis shows how contemporary European citizenship is conceptualised around the dimension of competences. This focus on competences is also visible in the normative framing of education and its relationship to citizenship in the texts: CE should help young people learn how to become good citizens by acquiring a toolkit of competences, consisting of knowledge, skills, values and attitudes that can be predetermined, measured and evaluated. This ideal of citizenship-as-competence entails a focus on the educational purposes of socialisation and qualification. Current policy texts thus seem to leave out the educational purpose of emancipating young people, allowing them to take on citizenship as something to which they can determine their own relation and position. It is, however, this purpose of CE that seems increasingly important in our current context. Young people are stepping out of school and onto the streets by the thousands in Belgium and throughout Europe, protesting for more and better environmental policies. They are making use of existing modes of citizenship, exactly to indicate to policymakers how these are falling short and are claiming their right and entitlement to a future that established practices of politics are putting at risk. The importance of citizenship education might then lie, now more than ever, not in the fact that it would prepare young people for competent citizenship, but in offering them a possibility, an emancipatory experience of being able to do something new. It seems that this is what we might want to expect from the school if we want it to educate our truly future citizens.

Keywords: citizenship education, normativity, policy, purposes of education

Procedia PDF Downloads 127
6496 Dry Modifications of PCL/Chitosan/PCL Tissue Scaffolds

Authors: Ozan Ozkan, Hilal Turkoglu Sasmazel

Abstract:

Natural polymers are widely used in tissue engineering applications because of their biocompatibility, biodegradability and solubility in the physiological medium. Synthetic polymers are also widely utilized in tissue engineering applications because they carry no risk of infectious disease and do not cause immune system reactions. However, the disadvantages of both polymer types prevent their efficient individual use as tissue scaffolds. Therefore, the idea of using natural and synthetic polymers together as a single 3D hybrid scaffold, which has the advantages of both and the disadvantages of neither, has entered the literature. Even though these hybrid structures support cell adhesion and/or proliferation, various surface modification techniques are applied to their surfaces to create topographical changes and to obtain the reactive functional groups required for the immobilization of biomolecules, especially on the surfaces of the synthetic polymers, in order to improve cell adhesion and proliferation. The study presented here aimed to improve the surface functionality and topography of layer-by-layer electrospun 3D poly-epsilon-caprolactone/chitosan/poly-epsilon-caprolactone hybrid tissue scaffolds by using atmospheric pressure plasma methods and thereby to improve the cell adhesion and proliferation of these tissue scaffolds. The formation of functional hydroxyl and amine groups and of topographical changes on the scaffold surfaces was realized with two different atmospheric pressure plasma systems (nozzle type and dielectric barrier discharge (DBD) type) operated under different gas media (air, Ar+O2, Ar+N2). The plasma modification time and distance for the nozzle-type plasma system, as well as the plasma modification time and gas flow rate for the DBD system, were optimized by monitoring the changes in surface hydrophilicity with contact angle measurements. The topographical and chemical characterizations of the modified biomaterial surfaces were carried out with SEM and ESCA, respectively. The results showed that the atmospheric pressure plasma modifications carried out with both the nozzle-type and DBD plasmas caused topographical and functionality changes on the surfaces of the layer-by-layer electrospun tissue scaffolds. However, the shelf-life studies indicated that the hydrophilicity introduced to the surfaces was mainly due to the functionality changes. According to the optimized results, samples treated with nozzle-type air plasma for 9 minutes at a distance of 17 cm and with Ar+O2 DBD plasma for 1 minute under a 70 cm³/min O2 flow rate showed the highest hydrophilicity compared to the pristine samples.

Keywords: biomaterial, chitosan, hybrid, plasma

Procedia PDF Downloads 273
6495 Investigation of Linezolid, 127I-Linezolid and 131I-Linezolid Effects on Slime Layer of Staphylococcus with Nuclear Methods

Authors: Hasan Demiroğlu, Uğur Avcıbaşı, Serhan Sakarya, Perihan Ünak

Abstract:

Implanted devices are increasingly used in modern medicine to relieve pain or to restore a compromised function. Implant-associated infections are an emerging complication, caused by organisms that adhere to the implant surface and grow embedded in a protective extracellular polymeric matrix known as a biofilm. The microorganisms within biofilms enter a stationary growth phase and become phenotypically resistant to most antimicrobials, frequently causing treatment failure. In such cases, surgical removal of the implant is often required, causing high morbidity and substantial healthcare costs. Staphylococcus aureus is the most common pathogen causing implant-associated infections. Successful treatment of these infections includes early surgical intervention and antimicrobial treatment with bactericidal drugs that also act on the surface-adhering microorganisms. Linezolid is a promising antimicrobial with anti-staphylococcal activity, used for the treatment of MRSA infections. It is a synthetic antimicrobial of the oxazolidinone group, with a dose-dependent bacteriostatic or bactericidal mechanism against gram-positive bacteria. The intensive use of antibiotics has led to the emergence of multi-resistant organisms over the years, and major problems have arisen in the treatment of the infections they cause. While new drugs have been developed worldwide, infections caused by microorganisms that have gained resistance to these drugs have been reported, and the scale of the problem is gradually increasing. Scientific studies on bacterial biofilm formation have increased in recent years. For this purpose, we investigated the activity of Lin, Lin radiolabeled with 131I (131I-Lin) and cold iodinated Lin (127I-Lin) against a clinical strain of Staphylococcus aureus DSM 4910 in biofilm. In the first stage, radiolabeling and cold-labeling studies were performed. Quality-control studies of Lin and the iodinated (radioactive and cold) Lin derivatives were carried out by TLC (thin layer radiochromatography) and HPLC (high pressure liquid chromatography). The labeling yield was found to be about 86±2% for 131I-Lin. The minimal inhibitory concentration (MIC) of Lin, 127I-Lin and 131I-Lin for the Staphylococcus aureus DSM 4910 strain was found to be 1 µg/mL. In time-kill studies, Lin, 127I-Lin and 131I-Lin produced ≥3 log10 decreases in viable counts (cfu/ml) within 6 h at 2 and 4 times the MIC, respectively, and no viable bacteria were observed within 24 h of the experiments. Biofilm eradication of S. aureus started at 64 µg/mL of Lin, 127I-Lin and 131I-Lin, with OD630 values of 0.507±0.092, 0.589±0.058 and 0.266±0.047, respectively; the media control of biofilm-producing Staphylococcus was 1.675±0.01 (OD630). 131I and 127I alone did not have any effect on the biofilms. Lin and 127I-Lin were found to be less effective than 131I-Lin at killing cells within the biofilm and at biofilm eradication. Our results demonstrate that 131I-Lin has potent anti-biofilm activity against S. aureus compared to Lin, 127I-Lin and the media control, which suggests that 131I may have a damaging effect on the biofilm structure.

Keywords: iodine-131, linezolid, radiolabeling, slime layer, Staphylococcus

Procedia PDF Downloads 554
6494 Designing Intelligent Adaptive Controller for Nonlinear Pendulum Dynamical System

Authors: R. Ghasemi, M. R. Rahimi Khoygani

Abstract:

This paper proposes the design of a direct adaptive neural controller for a class of nonlinear pendulum dynamic systems. The radial basis function (RBF) neural adaptive controller is robust in the presence of external and internal uncertainties. Both the effectiveness of the controller and its robustness against disturbances are the focus of this paper. The simulation results show the promising performance of the proposed controller.
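
The abstract does not give the controller equations, so the following is a minimal illustrative sketch of a direct adaptive RBF controller tracking a reference angle on a single-link pendulum. The plant parameters, gains, basis centres and the Lyapunov-style weight update are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Assumed pendulum parameters and controller gains (illustrative only).
g, l, m, b = 9.81, 1.0, 1.0, 0.1
dt, T = 0.001, 10.0
k, lam, gamma = 20.0, 5.0, 50.0            # feedback, error-filter, adaptation gains
sigma = 0.8                                 # Gaussian RBF width

# RBF centres spread over the expected (angle, rate) operating range.
th_c = np.linspace(-1.5, 1.5, 5)
om_c = np.linspace(-3.0, 3.0, 5)
centers = np.array([[a, c] for a in th_c for c in om_c])
W = np.zeros(len(centers))                  # adaptive output weights

def rbf(x):
    """Gaussian radial basis activations for the state x = [theta, omega]."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

theta, omega = 0.5, 0.0                     # initial state
for step in range(int(T / dt)):
    t = step * dt
    theta_d, dtheta_d, ddtheta_d = 0.5*np.sin(t), 0.5*np.cos(t), -0.5*np.sin(t)

    e, de = theta_d - theta, dtheta_d - omega
    s = de + lam * e                        # filtered tracking error

    phi = rbf(np.array([theta, omega]))
    f_hat = W @ phi                         # RBF estimate of the unknown torque
    u = m * l**2 * (ddtheta_d + lam * de + k * s) - f_hat
    W -= gamma * s * phi * dt               # Lyapunov-style adaptive update

    tau = -m * g * l * np.sin(theta) - b * omega   # true (unknown) dynamics
    domega = (tau + u) / (m * l**2)
    theta += omega * dt
    omega += domega * dt

print(f"tracking error at end: {e:.4f} rad")
```

With zero initial weights the controller behaves like a PD-plus-feedforward law, and the RBF term gradually learns the gravity and friction torque, which is the basic mechanism a direct adaptive neural controller relies on.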

Keywords: adaptive neural controller, nonlinear dynamical system, neural network, RBF, driven pendulum, position control

Procedia PDF Downloads 478
6493 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material

Authors: H. M. Alfrihidi, H.A. Albarakaty

Abstract:

The use of flattening filter-free (FFF) photon beams in radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam hardening effect normally provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters made from low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials for low-energy photons is higher than that for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation. This phase-space file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the radiation passing through was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectra in the phase-space files. The dose distributions resulting from these beams were then simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV using a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% when using the steel and Al filters, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters made from low-Z materials decrease the surface dose and increase the D10 dose, allowing a high dose to be delivered to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than that of the FF.
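
As a rough numerical illustration of why a low-Z filter hardens an FFF beam while lowering its dose rate, the sketch below attenuates a toy photon spectrum through 1 cm of steel with the Beer-Lambert law. The spectrum and the iron mass attenuation coefficients are approximate placeholder values, not the simulated TrueBeam spectra or tabulated NIST data.

```python
import numpy as np

# Toy beam-hardening demonstration: attenuate each energy bin by exp(-mu*t)
# and compare the mean energy and transmitted fluence before and after the
# filter. All numbers below are illustrative assumptions.
energies = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])       # MeV bin centres
weights  = np.array([0.20, 0.30, 0.25, 0.15, 0.07, 0.03])   # toy FFF spectrum

# Approximate mass attenuation coefficients of iron (cm^2/g); they decrease
# with energy, so low-energy photons are preferentially removed.
mu_rho_fe = np.array([0.12, 0.084, 0.060, 0.043, 0.033, 0.030])
rho_fe, thickness = 7.87, 1.0                                # g/cm^3, cm

transmission = np.exp(-mu_rho_fe * rho_fe * thickness)
filtered = weights * transmission

mean_before = np.sum(energies * weights) / np.sum(weights)
mean_after = np.sum(energies * filtered) / np.sum(filtered)
print(f"mean energy: {mean_before:.2f} MeV -> {mean_after:.2f} MeV")
print(f"fluence kept after filter: {filtered.sum() / weights.sum():.0%}")
```

With these placeholder numbers the mean energy rises from about 1.2 MeV to about 1.4 MeV while roughly half the fluence is removed, mirroring the trade-off between beam hardening and dose-rate reduction described above.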

Keywords: flattening filter free, Monte Carlo, radiotherapy, surface dose

Procedia PDF Downloads 67
6492 Comparative Evaluation of Root Uptake Models for Developing Moisture Uptake Based Irrigation Schedules for Crops

Authors: Vijay Shankar

Abstract:

In the era of water scarcity, effective use of water via irrigation requires good methods for determining crop water needs. Implementation of irrigation scheduling programs requires an accurate estimate of water use by the crop. Moisture depletion from the root zone represents the consequent crop evapotranspiration (ET). A numerical model for simulating soil water depletion in the root zone has been developed by taking into consideration soil physical properties, crop and climatic parameters. The governing differential equation for unsaturated flow of water in the soil is solved numerically using the fully implicit finite difference technique. Water uptake by plants is simulated by using three different sink functions. The non-linear model predictions are in good agreement with field data, and thus it is possible to schedule irrigations more effectively. The present paper describes irrigation scheduling based on moisture depletion from the different layers of the root zone, obtained using different sink functions, for three cash, oil and forage crops: cotton, safflower and barley, respectively. The soil is considered to be at a moisture level equal to field capacity prior to planting. Two soil moisture regimes are then imposed for the irrigated treatment: one wherein irrigation is applied whenever soil moisture content is reduced to 50% of available soil water, and the other wherein irrigation is applied whenever soil moisture content is reduced to 75% of available soil water. For both soil moisture regimes, the model incorporating a non-linear sink function, which provides the best agreement of computed root zone moisture depletion with field data, has been found to be most effective in scheduling irrigations. Simulation runs with this moisture uptake function result in savings of 27.3 to 45.5% & 18.7 to 37.5%, 12.5 to 25% & 16.7 to 33.3%, and 16.7 to 33.3% & 20 to 40% of irrigation water for cotton, safflower and barley, respectively, under the 50% & 75% moisture depletion regimes, over the other moisture uptake functions considered in the study. The simulation developed can be used for optimized irrigation planning for different crops, choosing a suitable soil moisture regime depending upon irrigation water availability and crop requirements.
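
To make the scheduling logic concrete, the sketch below is a minimal bucket-model version of moisture-depletion-based irrigation scheduling with a depth-weighted, non-linear root uptake (sink) function. It is a simplified stand-in for the paper's implicit finite-difference solution of the unsaturated flow equation; the soil constants, daily ET and the uptake shape are illustrative assumptions.

```python
import numpy as np

# Assumed soil and root-zone constants (illustrative only).
fc, wp = 0.30, 0.12           # field capacity, wilting point (vol. fraction)
root_depth_mm = 600.0
layers = 4
layer_mm = root_depth_mm / layers
taw = (fc - wp) * layer_mm    # available water held per layer (mm)

def sink_weights(n, power=2.0):
    """Non-linear sink: more extraction from shallow layers than deep ones."""
    z = (np.arange(n) + 0.5) / n            # normalised layer mid-depths
    w = (1.0 - z) ** power
    return w / w.sum()

def schedule(et_daily, days, trigger=0.5):
    """Irrigate whenever depletion reaches `trigger` of total available water."""
    awc = np.full(layers, taw)              # start at field capacity
    weights = sink_weights(layers)
    irrigations = []
    for day in range(days):
        awc -= et_daily * weights           # partition daily ET among layers
        awc = np.clip(awc, 0.0, taw)        # a dry layer supplies no more water
        if awc.sum() <= (1.0 - trigger) * layers * taw:
            irrigations.append(day)
            awc[:] = taw                    # refill root zone to field capacity
    return irrigations

print("50% depletion regime, irrigation days:", schedule(5.0, 60, trigger=0.5))
print("75% depletion regime, irrigation days:", schedule(5.0, 60, trigger=0.75))
```

Changing the exponent in `sink_weights` mimics switching between uptake functions: a steeper profile concentrates extraction near the surface and shifts the irrigation dates, which is the mechanism behind the water savings compared in the study.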

Keywords: irrigation water, evapotranspiration, root uptake models, water scarcity

Procedia PDF Downloads 327
6491 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created by using a perturbation law based on Bezier curves. Each cut, having multiple control points, was created along virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values automatically chosen for the control points defined during parameterization. The optimization was performed using two algorithms, i.e. a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum; the evaluation of successive design iterations was performed with the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it uses derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant, and the performance was quantified by using a multi-objective function. In addition to these two classes of optimization methods, four optimization cases were considered: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency, while total pressure loss and entropy were reduced. The combination of hub and shroud did not show the overwhelming results that were achieved for the individual hub and shroud cases, which may be caused by the fact that there were too many control variables. The fourth optimization case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud; the efficiency increase exceeded that of the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. The results of the artificial neural network and the conjugate gradient method were compared.
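
The sketch below illustrates the general idea of a Bezier-based non-axisymmetric endwall perturbation: control-point heights define the axial profile of one cut, and the perturbation is varied pitchwise across the passage. The pitchwise sinusoid, control-point counts and amplitudes are assumptions for illustration; the actual NUMECA parameterization with streamline-based cuts differs.

```python
import numpy as np
from math import comb

def bezier(control_heights, u):
    """Evaluate a 1D Bezier curve (Bernstein basis) at parameters u in [0, 1]."""
    n = len(control_heights) - 1
    u = np.atleast_1d(u)
    basis = np.array([comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)])
    return control_heights @ basis

def endwall_perturbation(ctrl, n_axial=50, n_pitch=36, n_lobes=1):
    """Endwall height delta over an (axial, pitchwise) grid.

    ctrl: control-point heights along the axial chord (the design variables).
    The pitchwise variation is modelled here as one sinusoidal lobe per blade
    passage, an assumption standing in for the streamline-based cuts.
    """
    u = np.linspace(0.0, 1.0, n_axial)            # normalised axial position
    axial_profile = bezier(np.asarray(ctrl), u)   # height along one cut
    pitch = np.linspace(0.0, 2 * np.pi, n_pitch)  # one blade passage
    return np.outer(axial_profile, np.sin(n_lobes * pitch))

# Example: 6 design variables, zero at both ends so the profiled endwall
# blends smoothly back into the axisymmetric annulus (values in mm, illustrative).
ctrl = [0.0, 1.5, -0.8, 0.6, -1.2, 0.0]
dz = endwall_perturbation(ctrl)
print("perturbation range: %.2f to %.2f mm" % (dz.min(), dz.max()))
```

In an optimization loop, the entries of `ctrl` (for every cut on hub and shroud) are the variables sampled in the design of experiments and then driven either by the surrogate-assisted genetic algorithm or by the conjugate gradient method.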

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 221
6490 Optimization of Cutting Parameters during Machining of Fine Grained Cemented Carbides

Authors: Josef Brychta, Jiri Kratochvil, Marek Pagac

Abstract:

The group of progressive cutting materials includes non-traditional, emerging and less-used materials whose efficient use in cutting can lead to a quantum leap in the field of machining. These are essentially "superhard" materials (STM) based on polycrystalline diamond (PCD) and polycrystalline cubic boron nitride (PCBN), high-performance cutting ceramics, and the constantly "perfected" fine-grained coated cemented carbides. The latter cutting materials are characterized by two parameters, toughness and hardness. By varying the alloying elements, it is usually possible to improve only one of these parameters. Reducing the grain size, on the other hand, achieves the "contradictory" properties, namely an increase in both hardness and toughness.

Keywords: grained cutting materials, difficult to machine materials, optimum utilization, mechanic, manufacturing

Procedia PDF Downloads 296
6489 MicroRNA in Bovine Corpus Luteum during Early Pregnancy

Authors: Rreze Gecaj, Corina Schanzenbach, Benedikt Kirchner, Michael Pfaffl, Bajram Berisha

Abstract:

The maintenance of the corpus luteum (CL) during early pregnancy in cattle is a critical and multifarious process. A luteotrophic mechanism originating from the embryo is widely accepted as the triggering signal for CL maintenance. In cattle, it is the interferon-tau (IFNT) secreted by the conceptus that prevents CL regression and ensures progesterone production for the establishment of pregnancy. In addition to endocrine and paracrine signals, microRNAs (miRNA) can also support CL sustainability during early pregnancy. MiRNAs are small non-coding nucleic acids that regulate gene expression post-transcriptionally and have been shown to be involved in the modulation of CL function. However, the examination of miRNAs in corpus luteum function during early pregnancy still remains largely unexplored. This study aims at profiling the expression of miRNA in the CL during early pregnancy in cattle by comparing it with the CL from the late cycle and with the regressed CL. Corpora lutea were assigned to two groups from the estrous cycle (group C13, late CL: days 13-18, and group C18, regressed CL: day >18) and one group from early pregnancy (group P: months 1-2). The estrous cycle was determined by macroscopic examination, and crown-rump length measurement was applied to age the fetus. A total of 9 corpora lutea from individual animals were included in the study, three corpora lutea for each group. The miRNA population was profiled using small RNA next-generation sequencing, and biologically significant miRNAs were evaluated for differential expression using the DESeq2 methodology. We show that 6 differentially expressed miRNAs (bta-mir-2890, -2332, -2441-3p, -148b, -1248 and -29c) are common to both comparisons, P vs C13 and P vs C18, while each comparison also yielded unique differentially expressed miRNAs: bta-miR-23a and -769 were unique to P vs C13, whereas forty-four unique miRNAs were identified as differentially expressed in P vs C18. These data confirm that miRNAs are highly abundant in luteal tissue during early pregnancy and potentially regulate CL maintenance at this stage of fetal development.
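
DESeq2 itself is an R package; the Python sketch below only mirrors two small pieces of the workflow described above: the median-of-ratios size-factor normalization that DESeq2 applies to a count matrix, and the set intersection used to find miRNAs shared between the P vs C13 and P vs C18 comparisons. The count matrix is random toy data, and the dispersion estimation and statistical testing are left to DESeq2 proper.

```python
import numpy as np
import pandas as pd

# Toy miRNA count matrix: 5 features x 9 samples (3 per group), random values.
counts = pd.DataFrame(
    np.random.default_rng(1).poisson(50, size=(5, 9)),
    index=[f"miR_{i}" for i in range(5)],
    columns=[f"P{i}" for i in range(3)]
            + [f"C13_{i}" for i in range(3)]
            + [f"C18_{i}" for i in range(3)],
)

def size_factors(mat: pd.DataFrame) -> pd.Series:
    """Median-of-ratios size factors (the normalization scheme DESeq2 uses)."""
    nz = mat[(mat > 0).all(axis=1)]                # keep features with no zero counts
    log_geo_mean = np.log(nz).mean(axis=1)         # per-feature log geometric mean
    ratios = np.log(nz).sub(log_geo_mean, axis=0)  # log ratio of sample to geo mean
    return np.exp(ratios.median(axis=0))           # median of ratios per sample

norm = counts / size_factors(counts)               # normalized counts

# DE lists: the shared and C13-specific miRNAs are taken from the abstract;
# "bta-miR-placeholder" stands in for the 44 unlisted C18-specific miRNAs.
shared = {"bta-mir-2890", "bta-mir-2332", "bta-mir-2441-3p",
          "bta-mir-148b", "bta-mir-1248", "bta-mir-29c"}
de_p_vs_c13 = shared | {"bta-miR-23a", "bta-miR-769"}
de_p_vs_c18 = shared | {"bta-miR-placeholder"}
print("common to both comparisons:", sorted(de_p_vs_c13 & de_p_vs_c18))
```

The intersection recovers the six shared miRNAs reported above, which is the comparison logic behind the "common to both comparisons" statement.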

Keywords: bovine, corpus luteum, microRNA, pregnancy, RNA-Seq

Procedia PDF Downloads 254
6488 Remote Criminal Proceedings as Implication to Rethink the Principles of Criminal Procedure

Authors: Inga Žukovaitė

Abstract:

This paper aims to present postdoctoral research on remote criminal proceedings in court. In this period, when most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation but also to assess the effectiveness of the instrument used and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that criminal procedure, based on orality and immediacy, does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism make us ask whether we are facing the possibility of rethinking the traditional ways of understanding the safeguards, in order to preserve their essence without devaluing their traditional package, while looking for new components to replace or compensate for the so-called "loss" of safeguards. Reflection on technological progress in the field of criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those that are in crisis as a result of the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. In the context of the inferiority of technology, scholarly debate still addresses the issue of whether the court will not gradually become a mere site for the exercise of penal power, with the resultant consequence of the deformation of the procedure itself as a physical ritual. In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, the concept of which is based on the application of different models of criminal procedure (inquisitorial and adversarial); the aim is to assess the challenges posed for legal regulation by the interaction of technological progress with the principles of criminal procedure. The main hypothesis to be tested is that the adoption of remote proceedings is directly linked to the prevailing model of criminal procedure: the more the principles of the inquisitorial model are applied to the criminal process, the more acceptable a remote criminal trial is, and conversely, the more the criminal process is based on an adversarial model, the more the remote criminal process is seen as incompatible with the principle of immediacy. In order to achieve this goal, the following tasks are set: to identify whether the adversarial and inquisitorial models differ in how they assess remote proceedings against the immediacy principle, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania, Italy, etc.).

Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model

Procedia PDF Downloads 61
6487 Correlation Study between Clinical and Radiological Findings in Knee Osteoarthritis

Authors: Nabil A. A. Mohamed, Alaa A. A. Balbaa, Khaled E. Ayad

Abstract:

Osteoarthritis (OA) of the knee is the most common form of arthritis and leads to more activity limitations (e.g., disability in walking and stair climbing) than any other disease, especially in the elderly. Recently, impaired proprioceptive accuracy of the knee has been proposed as a local factor in the onset and progression of radiographic knee OA (ROA). Purpose: To compare the clinical and radiological findings of healthy subjects with those of patients with knee OA, and to determine whether there is a correlation between the clinical and radiological findings in patients with knee OA. Subjects: Fifty-one patients diagnosed with unilateral or bilateral knee OA, aged 35-70 years, of both genders and without any previous history of knee trauma or surgery, and twenty-one normal subjects aged 35-68 years. Methods: Peak torque/body weight (PT/BW) was recorded from the knee extensors in isokinetic isometric mode at an angle of 45°. The absolute angular error (AAE) was recorded at 45° and 30° to measure joint position sense (JPS). Anteroposterior (AP) plain X-rays were taken in a standing semi-flexed knee position, and the average scores of the Timed Up and Go test (TUG) and the WOMAC were recorded as measures of knee pain, stiffness and function. Comparison between the mean values of the different variables in the two groups was performed using an unpaired Student t-test. A P value less than or equal to 0.05 was considered significant. Results: There were significant differences between the experimental and control groups in the studied variables, except for the AAE at 30°. Also, there were no significant correlations between the clinical findings (pain, function, muscle strength and proprioception) and the severity of arthritic changes on X-ray. Conclusion: From the findings of the current study, we can conclude that there were significant differences between the two groups in all studied parameters (WOMAC, functional level, quadriceps muscle strength and joint proprioception). This study also does not support reliance on radiological findings alone in the management of knee OA, as the radiological features did not necessarily indicate the level of structural damage in patients with knee OA; clinical features should therefore be considered in the treatment plan.
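
The analysis reported above rests on two statistical steps: an unpaired Student t-test between the OA and control groups, and a correlation check between clinical measures and radiographic severity. The snippet below sketches both with SciPy on randomly generated placeholder data; the group sizes match the study, but the values and the choice of Spearman rank correlation are assumptions, since the abstract does not name the correlation coefficient used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pt_bw_oa = rng.normal(1.2, 0.3, 51)        # peak torque / body weight, OA group (placeholder)
pt_bw_ctrl = rng.normal(1.8, 0.3, 21)      # control group (placeholder)

# Unpaired (independent samples) Student t-test between the two groups.
t_stat, p_val = stats.ttest_ind(pt_bw_oa, pt_bw_ctrl)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Correlation between a clinical score and an ordinal radiographic grade.
womac = rng.normal(45, 15, 51)             # clinical score (placeholder)
xray_grade = rng.integers(1, 5, 51)        # radiographic severity grade (placeholder)
rho, p_corr = stats.spearmanr(womac, xray_grade)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.4f}")
```

A non-significant correlation p-value in the second step is what supports the study's conclusion that radiographic severity does not track the clinical picture.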

Keywords: joint position sense, peak torque, proprioception, radiological knee osteoarthritis

Procedia PDF Downloads 298