Search results for: finite difference scheme
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7776

1206 Relationship between Causes of Carcass Condemnation and Other Welfare Indicators Collected in Three Poultry Slaughterhouses

Authors: Sara Santos, Cristina Saraiva, Sónia Saraiva

Abstract:

The objective of this study was to evaluate the welfare of reared broilers using scoring systems at the slaughterhouse. The welfare of broilers from 70 different flocks, comprising 373,043 animals, was assessed in three different slaughterhouses, although not in equal proportions, because company size determined the number of flocks slaughtered per day. Twenty-one flocks were evaluated in slaughterhouse A (30%), thirty in slaughterhouse B (42.9%) and nineteen in slaughterhouse C (27.1%). The parameters evaluated were feather cleanness, foot pad dermatitis, hock burn, breast burn and causes of carcass condemnation. Feather cleanness was scored into three classes: 0 = clean, 1 = moderately dirty and 2 = dirty feathers. Foot pad dermatitis, hock burn and breast burn were graded in three classes: 0 = no lesions, 1 = moderate lesions and 2 = severe lesions. Causes of carcass condemnation were divided into emaciation, ascites, colour alteration and febrile state, arthritis, aerosaculitis, dermatitis, peritonitis, myositis, cellulitis, extensive trauma, and technopathies such as mechanical trauma, insufficient bleeding and deficient plucking. The broilers evaluated had a body weight ranging between 0.909 kg and 2.588 kg (median 1.522 kg) and an age between 25 and 45 days (median 33 days). The rejection rate of flocks ranged between 0.1% and 10.48% (median 1.4029%), and the foot pad dermatitis total score ranged between 2 and 197, with 20 flocks presenting moderate lesions and 15 flocks severe lesions. Moderate hock burn was associated with severe foot pad dermatitis and with breast burn. The associations between these lesions suggest that the forms of contact dermatitis share a common cause: prolonged contact with litter of poor quality.
In conclusion, contact dermatitis lesions (mostly foot pad dermatitis), feather hygiene and the rejection rate were the main constraints on good welfare and are considered important indicators for following up on farm conditions.
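The three-class grading described above lends itself to a simple aggregation routine. The sketch below is illustrative only: the abstract does not specify how per-bird grades are combined into a flock's total score, so summing grades over a sampled flock, and the moderate/severe cutoff, are both assumptions made here for demonstration.

```python
def flock_total_score(bird_grades):
    """Sum per-bird grades (0 = no lesions, 1 = moderate, 2 = severe)."""
    assert all(g in (0, 1, 2) for g in bird_grades)
    return sum(bird_grades)

def classify_flock(total, moderate_cutoff=50):
    """Map a flock's total foot pad score to a lesion category.
    The cutoff value is hypothetical, chosen only for illustration."""
    if total == 0:
        return "no lesions"
    return "moderate" if total < moderate_cutoff else "severe"

# A hypothetical sample of 100 birds: 60 clean, 30 moderate, 10 severe
sample = [0] * 60 + [1] * 30 + [2] * 10
total = flock_total_score(sample)
```

A total score of 2 to 197, as reported in the abstract, is consistent with this kind of per-flock summation over a fixed sample of birds.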

Keywords: broiler, dermatitis, welfare, slaughterhouse

Procedia PDF Downloads 110
1205 East West Discourse: An Esoteric Comparison of the Western Philosophy and the Eastern Vedanta

Authors: Chandrabati Chakraborty

Abstract:

The progressive emergence, in the course of the evolution of life, mind and personality, requires us to assume a creative Principle operating as timeless Reality in the temporal. The difference between Western philosophy and that of India concerns the origin and purpose of the philosophical enquiry: while the former begins in wonder at the external world, the latter begins in awareness of the perennial suffering associated with human existence. The present world suffers from a basic form of rootlessness, reflected in many psychological and philosophical studies. Alienation, a major theme of the human condition in the contemporary epoch, has emerged as a natural consequence of the existential predicament. As Edmund Fuller observes, individuals suffer not only from famine, ruin or even war but also from devastating inner problems of estrangement, hopelessness and utter despair. This existentialism is thus characterized by Jean Wahl as the “philosophies of existence”. The post-world-war scenario well illustrates this chaos, annihilation, frustration and anguished estrangement. In such conditions, when the West cries out, “What is there? I know first of all that I am. But who am I? ... I am separated. What I am separated from I cannot name. But I am separated.” (Dostoevsky: The Confession), Vedantic philosophy looks upon the Pilgrim’s Progress of humanity as being essentially one, operating squarely within the bounds of reality, reflecting a basic human experience, outbraving indecorous dictums that have failed to give due honour to human beings, and echoing for centuries the Sanskrit slokas with ultimate certitude: II Esa Atma samaha plusina samo masakena samo nagena sama ebhis tribhir lokaih ... sama nena sarvena II (The Atman (Divine Soul) is the same in the ant, the same in the gnat, the same in the elephant, the same in these three worlds ... the same in the whole Universe).
The present paper aims at a comparative study of cultural and philosophical expression, drawing extensive illustrations from Western philosophers and from the Vedantic and Upanishadic lore of Indian philosophy.

Keywords: existentialism, Vedanta, philosophy, absurdism

Procedia PDF Downloads 72
1204 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from the depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem with realistic applications to the distribution of materials to shops, healthcare facilities or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products, which we call product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence: first customer 1 is serviced, then customer 2, then customer 3, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities; the actual preference of each customer becomes known only when the vehicle visits the customer. It is also assumed that the quantity each customer demands is a random variable with known distribution, and the actual demand is revealed upon the vehicle’s arrival at the customer’s site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and between the customers and the depot are known.
If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. The optimal routing strategy can be found using a suitable stochastic dynamic programming algorithm, and it can be proved that the optimal strategy has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over routing strategies having this structure. The findings of the present study lead us to the conclusion that dynamic programming may be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering products, collects them from the ordered customers.
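The stochastic dynamic programming idea can be made concrete on a drastically simplified toy instance: unit demands, a single travel cost between consecutive customers, a fixed detour cost for restocking, and full refill of both products at the depot. All numbers and simplifications below are illustrative assumptions, not the authors' formulation.

```python
from functools import lru_cache

P1 = (0.7, 0.3, 0.5, 0.6)   # P(customer i prefers product 1); unit demands assumed
TRAVEL = 1.0                # cost of each leg between consecutive stops
DETOUR = 2.5                # extra cost of a restocking detour to the depot
PENALTY = 4.0               # cost of substituting the non-preferred product
CAP = 2                     # vehicle capacity per product
INF = float("inf")

@lru_cache(maxsize=None)
def cost_to_go(i, s1, s2):
    """Minimum expected cost from customer i onward with stocks (s1, s2)."""
    if i == len(P1):
        return TRAVEL  # final leg back to the depot
    restock = DETOUR + serve(i, CAP, CAP)   # refill both products first
    direct = serve(i, s1, s2)               # proceed with current stocks
    return min(restock, direct)

def serve(i, s1, s2):
    """Expected cost of travelling to and serving customer i."""
    p = P1[i]
    # Customer wants product 1: deliver it, or substitute product 2 at a penalty.
    if s1 > 0:
        want1 = cost_to_go(i + 1, s1 - 1, s2)
    elif s2 > 0:
        want1 = PENALTY + cost_to_go(i + 1, s1, s2 - 1)
    else:
        want1 = INF  # completely out of stock: a restock is forced upstream
    if s2 > 0:
        want2 = cost_to_go(i + 1, s1, s2 - 1)
    elif s1 > 0:
        want2 = PENALTY + cost_to_go(i + 1, s1 - 1, s2)
    else:
        want2 = INF
    return TRAVEL + p * want1 + (1 - p) * want2

optimal = cost_to_go(0, CAP, CAP)
```

In the authors' full problem the demand at each customer is itself random and restocking quantities are decision variables; the threshold ("critical number") structure they prove would let the minimization above be restricted to far fewer candidate actions.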

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 255
1203 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed by the Asset Information & GIS department. This paper describes in detail the GIS data submission process and the journey taken to develop the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate and up-to-date data. Furthermore, thanks to this process, GIS is ready to be integrated with other systems and to serve as the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting and data alterations also helped to reduce the missing attributes in GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, it is concluded that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 186
1202 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt.%, to form so-called nano-heat-transfer fluids apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length- and time-scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube in which particles are homogeneously distributed, and periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles: Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation in nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension.
The simulation results confirm the effectiveness of the technique: the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising both the repulsive electric double layer and the attractive van der Waals contribution) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were, in turn, found to play a key role in governing the thermal behavior of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about the heat transfer performance and thermal characteristics of nanofluids and their potential application in solar thermal energy plants.
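The fixed-step fourth-order Runge-Kutta update used to advance the particle equations of motion can be sketched generically. The example below integrates a single particle relaxing toward the fluid velocity under Stokes drag only; the relaxation time tau and initial conditions are made-up values, and the full multi-force model (Brownian, collision and DLVO terms) of the study is not reproduced here.

```python
import math

def deriv(state, t, tau=1e-9, u=0.0):
    """Stokes-drag model: velocity relaxes toward the fluid velocity u."""
    x, v = state
    return (v, (u - v) / tau)

def rk4_step(state, t, dt, f):
    """One classical fourth-order Runge-Kutta step for a tuple-valued state."""
    k1 = f(state, t)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)), t + 0.5 * dt)
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)), t + 0.5 * dt)
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)), t + dt)
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Advance one particle with the paper's fixed time step of 1e-11 s
dt, tau = 1e-11, 1e-9        # tau is an assumed relaxation time
state, t = (0.0, 1.0), 0.0   # initial (position, velocity)
for _ in range(1000):
    state = rk4_step(state, t, dt, deriv)
    t += dt
analytic_v = math.exp(-t / tau)  # exact velocity decay for u = 0
```

Because the drag relaxation time here is two orders of magnitude larger than the step, the RK4 trajectory tracks the analytic exponential decay extremely closely, which is the rationale for the very small fixed step quoted in the abstract.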

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 175
1201 Preparing Curved Canals Using Mtwo and RaCe Rotary Instruments: A Comparison Study

Authors: Mimoza Canga, Vito Malagnino, Giulia Malagnino, Irene Malagnino

Abstract:

Objective: The objective of this study was to compare the effectiveness of Mtwo and RaCe rotary instruments in cleaning and shaping curved root canals. Material and Method: The present study was conducted on 160 simulated canals in resin blocks, with curvature angles of 15°-30°. These 160 simulated canals were divided into two groups of 80 blocks each, and each group was divided into two subgroups (n = 40 canals each). The simulated canals were prepared with Mtwo and RaCe rotary nickel-titanium instruments, and the root canals were measured at four different reference points, starting at 13 mm from the orifice. In the first group, the canals were prepared using the Mtwo rotary system (VDW, Munich, Germany). The Mtwo files used were 10/0.04, 15/0.05, 20/0.06, and 25/0.06. These instruments entered the full length of the canal, and each file was rotated in the canal until it reached the apical point. In the second group, the canals were prepared using RaCe instruments (La Chaux-de-Fonds, Switzerland) with the crown-down technique, using a torque-controlled electric motor (VDW Co., Munich, Germany) at 600 rpm and 2 N·cm, in the following sequence: #40/0.10, #35/0.08, #30/0.06, #25/0.04, #25/0.02. The data were recorded using SPSS version 23 software and analyzed using the ANOVA test. Results: The results obtained using the Mtwo rotary instruments showed that these instruments were able to clean and shape curved canals at different levels without any deviation and in perfect symmetry (P < 0.001). The data also showed that the greater the depth of the root canal, the greater the deviations of the RaCe rotary instruments. These deviations occurred at three levels: S2 (P = 0.004), S3 (P = 0.007) and S4 (P = 0.009). The Mtwo files could go deeper and create a greater angle at level S4 (21°-28°) compared to the RaCe instruments (19°-24°).
Conclusion: The present study noted a clinically significant difference between Mtwo and RaCe rotary files used for canal preparation and indicated that Mtwo instruments are a better choice for curved canals.
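The between-group comparison above relies on a one-way ANOVA. As a generic illustration of the F statistic behind such a test (this is not the authors' SPSS analysis, and the deviation measurements below are invented numbers):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    values = [v for g in groups for v in g]
    grand_mean = sum(values) / len(values)
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical canal-angle measurements (degrees) for the two instrument groups
mtwo = [21, 24, 26, 28, 23]
race = [19, 20, 22, 24, 21]
f_stat = one_way_anova_f([mtwo, race])
```

The resulting F value would then be compared against the F distribution with (1, 8) degrees of freedom to obtain the P values reported in the abstract.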

Keywords: canal curvature, canal preparation, Mtwo, RaCe, resin blocks

Procedia PDF Downloads 101
1200 Acquisition of French (L3) Direct Object by Persian (L1) Speakers of English (L2) as EFL Learners

Authors: Ali Akbar Jabbari

Abstract:

The present study assessed the acquisition of L3 French direct objects by Persian speakers who had already learned English as their L2. The ultimate goal of this paper is to extend current knowledge about the cross-linguistic influence (CLI) phenomenon in the realm of third language acquisition by examining the role of Persian and English as background languages, and of the learners' English proficiency level, in their performance on the French direct object. To this end, the assumptions of three L3 hypotheses, namely L1 Transfer, the L2 Status Factor, and the Cumulative Enhancement Model, were examined. The research sample comprised 40 undergraduate students in the fields of English language and literature and translation studies at Birjand University in Iran. According to the English proficiency level revealed by the Oxford Quick Placement Test, the participants were grouped as upper intermediate and lower intermediate. A grammaticality judgment test and a translation test were administered to gather data on the learners' comprehension and production of the target structure in French. It was demonstrated that the rate of positive transfer from previously learned languages was stronger than the rate of negative transfer. A comparison of the groups' performances revealed a significant difference between the upper and lower intermediate groups in placing French direct objects correctly. However, the upper intermediate group did not differ significantly from the lower intermediate group in negative transfer. It can be said that as the learners' L2 proficiency increased, they could use their previous linguistic knowledge more efficiently. Although further examination is needed, the current study contributes to a better characterization of cross-linguistic influence in third language acquisition.
The findings help French teachers and learners to positively exploit prior knowledge of Persian and English and apply it in the multilingual context of teaching and learning the French direct object.

Keywords: cross-linguistic influence, Persian, French and English direct object, third language acquisition, language transfer

Procedia PDF Downloads 54
1199 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) measures the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for the standard deviation of normal-to-normal intervals (SDNN), the square root of the mean of squared differences between adjacent RR intervals (RMSSD) and the mean of R-to-R intervals (mean RR) in the time domain; the very-low-frequency (VLF), low-frequency (LF) and high-frequency (HF) components and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified with an accuracy of 95% that the proposed algorithm can identify a patient's mortality risk one hour before death. Identifying a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
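The time-domain measures named above (mean RR, SDNN, RMSSD) can be computed directly from a sequence of RR intervals, and the classification step reduces to a nearest-neighbor vote. A minimal sketch follows; the interval values, feature pairs and k are illustrative, not taken from the study's dataset.

```python
import math

def hrv_time_domain(rr_ms):
    """Mean RR, SDNN and RMSSD from RR intervals in milliseconds."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of the normal-to-normal intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive differences
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rr, sdnn, rmssd

def knn_predict(train_features, train_labels, query, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    order = sorted(range(len(train_features)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_features[i], query)))
    votes = [train_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

mean_rr, sdnn, rmssd = hrv_time_domain([800, 810, 790, 805, 795])
```

In practice each subject's (SDNN, RMSSD, ...) feature vector would be fed to `knn_predict` with labels such as "healthy" and "SCD", mirroring the k-NN step described in the abstract.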

Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death

Procedia PDF Downloads 328
1198 Suitability Evaluation of Human Settlements Using a Global Sensitivity Analysis Method: A Case Study of China

Authors: Feifei Wu, Pius Babuna, Xiaohua Yang

Abstract:

The suitability evaluation of human settlements over time and space is essential to track potential challenges to suitable human settlements and to provide references for policy-makers. This study established a theoretical framework of human settlements based on the nature, human, economy, society and residence subsystems. Evaluation indicators were determined with consideration of the coupling effect among subsystems. Based on the extended Fourier amplitude sensitivity test algorithm, a global sensitivity analysis that considered the coupling effect among indicators was used to determine the weights of the indicators. Human settlement suitability was evaluated at both the subsystem and comprehensive system levels in 30 provinces of China between 2000 and 2016. The findings were as follows: (1) Human settlements suitability index (HSSI) values increased significantly in all 30 provinces from 2000 to 2016. Among the five subsystems, the suitability index of the residence subsystem exhibited the fastest growth, followed by the society and economy subsystems. (2) HSSI in eastern provinces with a developed economy was higher than that in western provinces with an underdeveloped economy. In contrast, the growth rate of HSSI in eastern provinces was significantly higher than that in western provinces. (3) The inter-provincial difference in HSSI decreased from 2000 to 2016; at the subsystem level, it decreased for the residence subsystem, whereas it increased for the economy subsystem. (4) The suitability of the nature subsystem has become a limiting factor for the improvement of human settlement suitability, especially in economically developed provinces such as Beijing, Shanghai, and Guangdong. The results can help support decision-making and policy for improving the quality of human settlements in a broad nature, human, economy, society and residence context.
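Variance-based sensitivity weights of the kind produced by the extended FAST algorithm can be illustrated with the underlying definition of a first-order index, S_i = Var(E[Y | X_i]) / Var(Y). The brute-force Monte Carlo sketch below uses a made-up two-indicator model with independent uniform inputs; it is a conceptual stand-in for extended FAST (which estimates the same quantity far more efficiently via Fourier coefficients), not the study's indicator system.

```python
import random

def first_order_index(f, dim, i, n_outer=200, n_inner=200, seed=42):
    """Monte Carlo estimate of S_i = Var(E[Y|X_i]) / Var(Y)
    for independent U(0,1) inputs."""
    rng = random.Random(seed)
    # Unconditional variance of the model output
    ys = [f([rng.random() for _ in range(dim)]) for _ in range(n_outer * n_inner)]
    mean = sum(ys) / len(ys)
    var_y = sum((y - mean) ** 2 for y in ys) / len(ys)
    # Variance of the conditional mean E[Y | X_i = x]
    cond_means = []
    for _ in range(n_outer):
        xi = rng.random()
        total = 0.0
        for _ in range(n_inner):
            x = [rng.random() for _ in range(dim)]
            x[i] = xi
            total += f(x)
        cond_means.append(total / n_inner)
    m = sum(cond_means) / n_outer
    var_cond = sum((c - m) ** 2 for c in cond_means) / n_outer
    return var_cond / var_y

def model(x):
    # Toy model: the first "indicator" dominates the output variance
    return 4.0 * x[0] + 1.0 * x[1]

s0 = first_order_index(model, dim=2, i=0)
s1 = first_order_index(model, dim=2, i=1)
```

Normalizing such indices across all indicators yields variance-based weights analogous to those the study derives for its evaluation indicators.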

Keywords: human settlements, suitability evaluation, extended Fourier amplitude, human settlement suitability

Procedia PDF Downloads 60
1197 Effect of Organic Fertilization and Intercropping of Potato (Solanum tuberosum) with Faba Bean (Vicia faba) on Potato Yield

Authors: Laila Nassiri, Aziza Irhza, Jamal Ibijbijen, Fouad Rachidi, Ghizlane Echchgadda

Abstract:

The introduction of agroecological practices in ecosystems can contribute to meeting the challenges posed by the shift of current agricultural production systems towards efficient production methods that are more respectful of the environment, including the reasoned use of inputs and resources. Intercropping is one of these practices; it involves growing two or more crops on the same plot during the same growing season. Organic fertilization can also contribute to increased yield through the potential availability of nutrients. The objective of this work was to study the effect of intercropping and organic fertilization, two important agroecological practices, on potato yield. Intercropping of potato and faba bean was carried out at the Agroecology and Environment platform (ENA, Meknes). The soil is silty-clay, and the climate is warm, with an average temperature of 17.1 °C and an average annual rainfall of 511 mm. Four treatments were tested: potato sole crop (T1), potato + organic fertilization (T2), potato + faba bean (T3), and potato + faba bean + organic fertilization (T4). The results showed a significant effect of treatment on the evolution of the agronomic characters studied, especially the number of leaves and the yield. The number of stems at t0 was equal to 1 in all treatments; it began to increase 30 days after sowing, with a slight advantage in the treatments containing organic fertilization (T2 and T4), and then stabilized 60 days after sowing. In terms of the mean number of leaves, a significant difference was noted between treatments, with the highest value recorded in treatment T2. Treatment T2 also showed the highest average yield, followed by the control (T1), and treatments T2 and T1 recorded the highest numbers of tubers.
Overall, in evaluating these two agroecological practices, this work shows that intercropping and organic fertilization have a significant effect on the measured growth and yield parameters of the potato.

Keywords: agroecology, intercropping, organic fertilization, potato yield

Procedia PDF Downloads 65
1196 Impact of Gamma Irradiation on Biological Activities of Artemisia herba alba from Algeria

Authors: Abir Mohamed Mohamed Ibrahim, Amina Titouche, Mohamed Hazzit

Abstract:

Phytotherapy is based on the use of natural plant products, which are among the main sources of drugs with healing properties for the treatment of human, animal or plant diseases. With these aims, and to replace chemical preservatives with natural products, we were interested in using essential oils from an Algerian endemic plant belonging to the Asteraceae family, Artemisia herba alba Asso, which underwent hydro-distillation after irradiation with gamma rays at doses of 10, 20, and 30 kGy. These doses gave essential oil yields of 1.087%, 1.087%, and 1.085%, respectively, compared with a yield of 1.27% for the untreated sample. The in vitro antioxidant activity of the A. herba alba essential oil was assessed by two different methods: inhibition of the DPPH radical and measurement of reducing power. The first method did not reveal a very large difference regardless of the irradiation dose; the IC50 is about 4000 mg/l and the maximum inhibition was around 49.4%. Likewise, the reducing-power test gave a maximum reducing capacity of 0.76%. Both results were recorded for the specimen irradiated at 20 kGy, which showed slightly better antioxidant power than the non-irradiated sample. To combat Fusarium culmorum, which causes wilts and rots, we focused on the antifungal screening of this aromatic plant. The results obtained, followed by measurements of Minimal Inhibitory Concentrations (MIC), showed a promising inhibitory effect against the pathogen tested. With a yield above 1%, the essential oil showed remarkable efficacy against the strain, mainly for the sample irradiated at 30 kGy (MICs = 625 µg/ml; MICc = 1250 µg/ml) with a MIC of 2%. These results demonstrate good antifungal activity, limiting and even stopping the development of the pathogenic microorganism, as well as the positive effect of the irradiation dose in enhancing this capacity and upholding the antioxidant capacity.

Keywords: artemisia herba alba Asso, essential oil yield, gamma ray, antioxidant activity, antifungal activity

Procedia PDF Downloads 499
1195 Radial Variation of Anatomical Characteristics in Three Native Fast-Growing Species Growing in South Kalimantan, Indonesia

Authors: Wiwin Tyas Istikowati, Futoshi Ishiguri, Haruna Aisho, Budi Sutiya, Imam Wahyudi, Kazuya Iizuka, Shinso Yokota

Abstract:

The objective of this study was to investigate the anatomical characteristics of three native fast-growing species, terap (Artocarpus elasticus Reinw. ex Blume), medang (Neolitsea latifolia (Blume) S. Moore), and balik angin (Alphitonia excelsa (Fenzel) Reissek ex Benth), growing in secondary forest in South Kalimantan, Indonesia, in order to evaluate the possibility of tree breeding for wood quality. Cell lengths were investigated for 5 trees of each species at several height positions (1.0, 3.0, 5.0, 7.0, 9.0, and 11.0 m above the ground). The mean fiber and vessel element lengths in terap, medang, and balik angin were 1.52 and 0.44, 1.16 and 0.53, and 1.02 and 0.49 mm, respectively. Fiber length in terap and balik angin gradually increased from pith to bark, whereas in medang it increased up to 2 cm from the pith and then remained nearly constant to the bark. Vessel element length was almost constant from pith to bark in terap and balik angin, while it increased slightly from pith to bark in medang. Fiber length in terap fluctuated with height: it decreased up to 3 m above the ground, increased up to 5 m, and then decreased towards the top of the tree. Vessel element length in terap, on the other hand, increased slightly up to 5 m above the ground and then decreased towards the top. Both fiber and vessel element lengths in medang were almost constant from ground level to the top of the tree, whereas both decreased from ground level to the top in balik angin. Significant differences at the 1% level among trees were found in both fiber and vessel element lengths, in both the radial and longitudinal directions, for terap and medang. Based on these results, it is concluded that the fiber and vessel element lengths of terap and medang can be improved by tree breeding programs.

Keywords: anatomical properties, fiber length, vessel element length, fast-growing species

Procedia PDF Downloads 320
1194 Towards an Eastern Philosophy of Religion: on the Contradictory Identity of Philosophy and Religion

Authors: Carlo Cogliati

Abstract:

The study of the relationship of philosophical reason with the religious domain has long been a concern for many Western philosophical and theological traditions. In this essay, I suggest a proposal for an Eastern philosophy of religion based on Nishida's contradictory identity of the two: philosophy soku hi (is, and yet is not) religion. This poses a challenge to the traditional Western contents and methods of the discipline. The paper serves three purposes. First, I critically assess Charlesworth's typology of the relation between philosophy and religion in the West: philosophy as/for/against/about/after religion. I also engage Harrison's call for a global philosophy of religion(s) and argue that, although it expands the scope and range of the questions to be addressed, it remains Western in its method. Second, I present Nishida's logic of absolutely contradictory self-identity as the instrument to transcend the dichotomous pair of identity and contradiction: 'A is A' and 'A is not A'. I then explain how this 'concrete' logic of the East, as opposed to the 'formal' logic of the West, best exhibits the bilateral dynamic relation between philosophy and religion. Even as Nishida argues for the non-separability of the two, he is also aware of, and committed to, their mutual non-reducibility. Finally, I outline the resulting new relation between God and creatures. In his philosophy soku hi religion, Nishida replaces the traditional Western dualistic concept of God with the Eastern non-dualistic understanding of God as “neither transcendent nor immanent, and at the same time both transcendent and immanent.” God is therefore a self-identity of contradiction, nowhere and yet everywhere present in the world of creatures. God as absolute being is also absolute nothingness: the world of creatures is the expression of God's absolute self-negation.
The overarching goal of this essay is to offer an alternative to traditional Western approaches to philosophy of religion based on Nishida's logic of absolutely contradictory self-identity, as an example of philosophical and religious (counter-)influence. The resulting relationship between philosophy and religion calls for a revision of traditional concepts and methods. The outcome is not to reformulate the Eastern predilection for not sharply distinguishing philosophical thought from religious enlightenment, but rather to bring philosophy and religion together in the place of identity and difference.

Keywords: basho, Nishida Kitaro, shukyotetsugaku, soku hi, zettai mujunteki jikodoitsu no ronri

Procedia PDF Downloads 170
1193 Determinants of Cessation of Exclusive Breastfeeding in Ankesha Guagusa Woreda, Awi Zone, Northwest Ethiopia: A Cross-Sectional Study

Authors: Tebikew Yeneabat, Tefera Belachew, Muluneh Haile

Abstract:

Background: Exclusive breast-feeding (EBF) is the practice of feeding only breast milk (including expressed breast milk) during the first six months, with no other liquids or solid foods except medications. The time to cessation of exclusive breast-feeding, however, differs across countries depending on various factors. Studies have shown that the risk of diarrheal morbidity and mortality is higher among non-exclusively breastfed infants, particularly around the time other foods are introduced. However, no study had evaluated the time to cessation of exclusive breast-feeding in the study area. The aim of this study was to determine the time to cessation of EBF and its predictors among mothers of index infants less than twelve months old. Methods: We conducted a community-based cross-sectional study from February 13 to March 3, 2012, using both quantitative and qualitative methods. The study included a total of 592 mothers of index infants selected using a multi-stage sampling method. Data were collected using an interviewer-administered structured questionnaire. Bivariate and multivariate Cox regression analyses were performed. Results: Cessation of exclusive breast-feeding occurred in 392 (69.63%) cases. Among these, 224 (57.1%) occurred before six months, while 145 (37.0%) and 23 (5.9%) occurred at six months and after six months of age of the index infant, respectively. The median time infants stayed on exclusive breast-feeding was 6.36 months in rural areas and 5.13 months in urban areas, and this difference was statistically significant on a log-rank (Cox-Mantel) test. Maternal and paternal occupation, place of residence, postnatal counseling on exclusive breast-feeding, mode of delivery, and birth order of the index infant were significant predictors of cessation of exclusive breast-feeding. 
Conclusion: Providing postnatal counseling on EBF, together with routine follow-up and support for mothers of infants, with particular emphasis on working mothers, can advance implementation of the national strategy on infant and young child feeding.
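The median durations reported above come from survival-curve estimation of time to cessation. As a rough illustration of the underlying method (not the authors' analysis code), a minimal Kaplan-Meier sketch in Python, with purely hypothetical times in months and event indicators:

```python
def km_survival(times, events):
    """Kaplan-Meier survival curve as a list of (time, S(t)) pairs.

    times: observed durations; events: 1 = cessation observed, 0 = censored.
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        c = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 0)
        if d > 0:
            surv *= 1.0 - d / at_risk
        curve.append((t, surv))
        at_risk -= d + c
    return curve

def km_median(curve):
    """First time at which survival drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

# hypothetical data: months of exclusive breastfeeding per infant
months = [3, 5, 6, 6, 6, 7]
ceased = [1, 1, 1, 1, 0, 1]  # 0 = still exclusively breastfed at last contact
median_ebf = km_median(km_survival(months, ceased))  # 6 for this toy data
```

Comparing two such curves (e.g. rural vs. urban) is what the log-rank test in the study formalizes.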

Keywords: exclusive breastfeeding, cessation, median duration, Ankesha Guagusa Woreda

Procedia PDF Downloads 297
1192 Optimization of Titanium Leaching Process Using Experimental Design

Authors: Arash Rafiei, Carroll Moore

Abstract:

Leaching, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics and fluid dynamics. Optimizing a leaching system by purely scientific methods therefore requires considerable time and expense. In this work, a mixture of two titanium ores and one titanium slag is used to extract titanium in the leaching stage of the TiO2 pigment production process. Optimum titanium extraction can be obtained by one of two strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage and the remaining stages are adapted to it. The second strategy optimizes the performance of more than one stage at once. The second strategy is more technically complex than the first, but it brings greater economic and technical advantages for the leaching system. Each strategy has its own optimum operational zone, distinct from the other's, and the best operational zone is chosen in light of the complexity and the economic and practical aspects of the leaching system. Experimental design was carried out using the Taguchi method. The most important advantages of this methodology are that it covers different technical aspects of the leaching process; it minimizes the number of experiments needed, as well as time and expense; and it accounts for parameter interactions through the principles of multifactor-at-a-time optimization. Leaching tests were done at batch scale in the lab with appropriate temperature control. The leaching tank geometry was treated as an important factor in providing comparable agitation conditions. 
Data analysis was done using reactor design and mass balancing principles. Finally, the optimum zone of operational parameters is determined for each leaching strategy and discussed in terms of its economic and practical aspects.
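The Taguchi workflow described above can be sketched compactly: lay out an orthogonal array, compute a signal-to-noise ratio per run, and read off main effects. The factor names and extraction yields below are made up for illustration, not the study's data:

```python
import math

# L4(2^3) orthogonal array: 4 runs x 3 two-level factors
# (hypothetical factors: acid concentration, temperature, agitation)
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def sn_larger_is_better(ys):
    """Taguchi S/N ratio (dB) for responses where higher is better."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def main_effects(array, sn):
    """Mean S/N at level 0 and level 1 of each factor."""
    effects = []
    for col in range(len(array[0])):
        levels = {0: [], 1: []}
        for row, s in zip(array, sn):
            levels[row[col]].append(s)
        effects.append(tuple(sum(v) / len(v) for v in levels.values()))
    return effects

# hypothetical Ti extraction yields (%) from replicate runs
yields = [[62.0, 64.0], [70.0, 71.0], [55.0, 57.0], [75.0, 74.0]]
sn = [sn_larger_is_better(y) for y in yields]
effects = main_effects(L4, sn)  # pick, per factor, the level with higher mean S/N
```

Selective extraction (strategy ii) would instead use a smaller-is-better S/N ratio for the impurity response and trade the two off.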

Keywords: titanium leaching, optimization, experimental design, performance analysis

Procedia PDF Downloads 354
1184 Two-Wavelength High-Energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography

Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko

Abstract:

The development of medical optoacoustic tomography using human blood as an endogenic contrast agent is constrained by the lack of reliable, easy-to-use and inexpensive sources of high-power pulsed laser radiation in the 750-900 nm spectral region [1-2]. Currently used titanium-sapphire and alexandrite lasers and optical parametric oscillators do not provide the required stable output characteristics, are structurally complex, and can cost up to half the price of a diagnostic optoacoustic system. Here we develop lasers based on Cr:LiCaAlF6 crystals, which are free of the abovementioned disadvantages and provide intense tens-of-ns tunable laser radiation at the specific absorption bands of oxy- (~840 nm) and deoxyhemoglobin (~757 nm) in blood. Cr:LiCAF (c=3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P=1500 hPa) [3]. The laser elements are cylindrical, 8 mm in diameter and 90 mm in length. The optical axis of the crystal was normal to the cylinder generatrix, which provides the π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other to within 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with dual-Xenon-flashlamp pumping in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of highly reflective dielectric mirrors with a 2 m curvature radius, a flat output mirror, a polarizer and a Q-switch cell, makes it possible to operate sequentially in alternation (one laser pulse following the other by 50 ns) at wavelengths of 757 and 840 nm. 
The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J at 180 μs) to equalize the laser radiation intensity at the two wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with output energy up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration of the laser pulses and their energy allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design of the laser will likely improve the output properties and enable better spatial resolution in medical multispectral optoacoustic tomography systems.

Keywords: medical optoacoustic, endogenic contrast agent, multiwavelength tunable pulse lasers, MOPA laser system

Procedia PDF Downloads 88
1190 Study on the Influence of Different Lengths of Tunnel High Temperature Zones on Train Aerodynamic Resistance

Authors: Chong Hu, Tiantian Wang, Zhe Li, Ourui Huang, Yichen Pan

Abstract:

When a train is running in a high geothermal tunnel, changes in the temperature field disturb the propagation and superposition of pressure waves in the tunnel, which in turn affect the aerodynamic resistance of the train. The aim of this paper is to investigate the effect of the length of the high-temperature zone of the tunnel on the aerodynamic resistance of the train, clarifying the evolution mechanism of the aerodynamic resistance of trains in tunnels with high ground temperatures. Firstly, moving-model tests of trains passing through wall-heated tunnels were conducted to verify the reliability of the numerical method used in this paper. Subsequently, based on the three-dimensional unsteady compressible RANS method and the standard k-ε two-equation turbulence model, the variation of the average aerodynamic resistance with the length of the high-temperature zone was analyzed, and the influence of frictional resistance and pressure-difference resistance on total resistance at different times was discussed. The results show that as the length of the high-temperature zone LH increases, the average aerodynamic resistance of a train running in the tunnel gradually decreases; when LH = 330 m, the aerodynamic resistance is reduced by 5.7%. At the moment of maximum resistance, the total resistance, pressure-difference resistance, and friction resistance all decrease gradually with increasing LH and then remain basically unchanged. At the moment of minimum resistance, with increasing LH, the total resistance first increases and then slowly decreases; the pressure-difference resistance first increases and then remains unchanged, while the friction resistance first remains unchanged and then gradually decreases; and the ratio of pressure-difference resistance to total resistance gradually increases with LH. 
The results of this paper can provide guidance for scholars who need to investigate the mechanism of aerodynamic resistance change of trains in high geothermal environments, as well as provide a new way of thinking for resistance reduction in non-high geothermal tunnels.

Keywords: high-speed trains, aerodynamic resistance, high-ground temperature, tunnel

Procedia PDF Downloads 48
1189 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high-accuracy time-of-flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency-hopping-based carrier phase ranging algorithm to achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to cancel the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency-hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution to achieve, in the typical 80 MHz bandwidth of commercial WiFi, a ranging accuracy comparable to that of UWB at 400 MHz bandwidth. 
Finally, to verify the validity of the algorithm, we implement this theory using a software radio platform, and the actual experimental results show that the method proposed in this paper has a median ranging error of 5.4 cm in the 5 m range, 7 cm in the 10 m range, and 10.8 cm in the 20 m range for a total bandwidth of 80 MHz.
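The core idea of carrier-phase ranging over hopped frequencies can be sketched simply: the phase varies linearly with carrier frequency with slope −2πτ, so unwrapping phase across hops and fitting that slope recovers the time of flight. The single-path, noise-free Python sketch below (assumed parameters: 2.4 GHz start, 1 MHz hop step over 80 MHz) deliberately omits the two-way asynchrony cancellation and multipath resolution that the paper's actual algorithm provides:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def simulate_phases(freqs_hz, dist_m):
    """Ideal measured phase (mod 2*pi) at each hopped carrier frequency."""
    tau = dist_m / C
    return [(-2 * math.pi * f * tau) % (2 * math.pi) for f in freqs_hz]

def estimate_distance(freqs_hz, phases):
    """Unwrap phase across hops, fit phase-vs-frequency slope, return distance."""
    unwrapped = [phases[0]]
    for p in phases[1:]:
        prev = unwrapped[-1]
        k = round((prev - p) / (2 * math.pi))  # branch closest to previous hop
        unwrapped.append(p + 2 * math.pi * k)
    n = len(freqs_hz)
    fm = sum(freqs_hz) / n
    pm = sum(unwrapped) / n
    slope = sum((f - fm) * (p - pm) for f, p in zip(freqs_hz, unwrapped)) / \
            sum((f - fm) ** 2 for f in freqs_hz)
    tau = -slope / (2 * math.pi)
    return tau * C

freqs = [2.4e9 + i * 1e6 for i in range(81)]  # 80 MHz equivalent bandwidth
```

The unwrapping step is only unambiguous while the per-hop phase change stays below π, i.e. for distances under c/(2·Δf) with hop step Δf; this is one reason the hop pattern matters in practice.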

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 104
1188 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are joined pairwise at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon; the polygon is obviously included in its own locus. Exploiting these properties in the construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be taken into account effectively. The solution is performed by reduction: preprocessing constructs the set of sites from the vertices and edges of the polygons, with each site oriented so that the interior of the polygon lies to its left; the proposed algorithm then constructs the VD for the set of oriented sites with the sweepline paradigm; postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm over the general Fortune algorithm is achieved by two fundamental solutions. 1. The algorithm constructs only those VD edges that lie outside the polygons: the concept of oriented sites avoids the construction of VD edges located inside the polygons. 2. The event list in the sweepline algorithm has a special property: the majority of events are connected with 'medium' polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and has been tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of that algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
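The locus definition used above (points closer to one polygon than to all others) can be illustrated without any sweepline machinery. The naive Python sketch below, purely for intuition and nowhere near the O(n log n) algorithm proposed in the paper, classifies query points by point-to-segment distance:

```python
import math

def seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def poly_dist(p, poly):
    """Distance from p to the boundary of a polygon given as a vertex list."""
    n = len(poly)
    return min(seg_dist(p, poly[i], poly[(i + 1) % n]) for i in range(n))

def nearest_site(p, polys):
    """Index of the polygon whose locus contains p (ties broken arbitrarily)."""
    return min(range(len(polys)), key=lambda i: poly_dist(p, polys[i]))
```

Every VD edge separates regions where `nearest_site` flips between two indices; the sweepline algorithm computes exactly those boundaries directly, without probing individual points.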

Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 162
1187 Effectiveness of Gamified Virtual Physiotherapy in Patients with Shoulder Problems

Authors: A. Barratt, M. H. Granat, S. Buttress, B. Roy

Abstract:

Introduction: Physiotherapy is an essential part of the treatment of patients with shoulder problems. Treatment usually centres on specific physiotherapy goals, ultimately aiming at improvement in pain and function. This study investigates whether computerised physiotherapy using gamification principles is as effective as standard physiotherapy. Methods: Physiotherapy exergames were created using a combination of commercially available hardware, the Microsoft Kinect, and bespoke software. The exergames were validated by mapping them to physiotherapy goals, which included strength, range of movement, control, speed, and activation of the kinetic chain. A multicentre, randomised, prospective controlled trial investigated the use of exergames in patients with Shoulder Impingement Syndrome who had undergone Arthroscopic Subacromial Decompression surgery. The intervention group was provided with the automated sensor-based technology, allowing them to perform exergames and track their rehabilitation progress. The control group was treated with standard physiotherapy protocols. Outcomes from different domains were used to compare the groups. An important metric was the assessment of shoulder range of movement pre- and post-operatively. The range of movement data included abduction, forward flexion and external rotation, measured by the software pre-operatively and at 6 and 12 weeks post-operatively. Results: Both groups showed significant improvement from pre-operative to 12 weeks in elevation in the forward flexion and abduction planes. Abduction improved in the intervention group (p < 0.015) as well as the control group (p < 0.003); forward flexion improved in the intervention group (p < 0.0201) and the control group (p < 0.004). There was, however, no significant difference between the groups at 12 weeks for abduction (p = 0.118), forward flexion (p = 0.190) or external rotation (p = 0.347). Conclusion: Exergames may be used as an alternative to standard physiotherapy regimes; however, further analysis focusing on patient engagement is required.

Keywords: shoulder, physiotherapy, exergames, gamification

Procedia PDF Downloads 173
1186 Electroencephalogram during Natural Reading: Theta and Alpha Rhythms as Analytical Tools for Assessing a Reader’s Cognitive State

Authors: D. Zhigulskaya, V. Anisimov, A. Pikunov, K. Babanova, S. Zuev, A. Latyshkova, K. Chernozatonskiy, A. Revazov

Abstract:

Electrophysiology of information processing in reading is certainly a popular research topic. Natural reading, however, has been relatively poorly studied, despite having broad potential applications for learning and education. In the current study, we explore the relationship between text categories and spontaneous electroencephalogram (EEG) while reading. Thirty healthy volunteers (mean age 26.68 ± 1.84) participated in this study. 15 Russian-language texts were used as stimuli. The first text was used for practice and was excluded from the final analysis. The remaining 14 were opposite pairs of texts in one of 7 categories, the most important of which were: interesting/boring, fiction/non-fiction, free reading/reading with an instruction, reading a text/reading a pseudo text (consisting of strings of letters that formed meaningless words). Participants had to read the texts sequentially on an Apple iPad Pro. EEG was recorded from 12 electrodes simultaneously with eye movement data via ARKit Technology by Apple. EEG spectral amplitude was analyzed in Fz for the theta band (4-8 Hz) and in C3, C4, P3, and P4 for the alpha band (8-14 Hz) using the Friedman test. We found that reading an interesting text was accompanied by an increase in theta spectral amplitude in Fz compared to reading a boring text (3.87 µV ± 0.12 and 3.67 µV ± 0.11, respectively). When instructions are given for reading, we see less alpha activity than during free reading of the same text (3.34 µV ± 0.20 and 3.73 µV ± 0.28, respectively, for C4 as the most representative channel). The non-fiction text elicited less activity in the alpha band (C4: 3.60 µV ± 0.25) than the fiction text (C4: 3.66 µV ± 0.26). A significant difference in alpha spectral amplitude was also observed between the regular text (C4: 3.64 µV ± 0.29) and the pseudo text (C4: 3.38 µV ± 0.22). These results suggest that some brain activity we see on EEG is sensitive to particular features of the text. 
We propose that changes in theta and alpha bands during reading may serve as electrophysiological tools for assessing the reader’s cognitive state as well as his or her attitude to the text and the perceived information. These physiological markers have prospective practical value for developing technological solutions and biofeedback systems for reading in particular and for education in general.
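The Friedman test used for the spectral-amplitude comparisons ranks each subject's values across conditions and asks whether the rank sums differ. A self-contained Python sketch of the test statistic (without the p-value lookup, and with toy numbers rather than the study's data):

```python
def friedman_chi2(data):
    """Friedman chi-square statistic.

    data: one row per subject, one value per condition; ties get average ranks.
    """
    n, k = len(data), len(data[0])
    col_rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:  # assign average ranks to tied values
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            col_rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_rank_sums) - 3 * n * (k + 1)

# hypothetical per-subject alpha amplitudes (µV) in three reading conditions
data = [[3.9, 3.4, 3.6], [3.7, 3.3, 3.5], [3.8, 3.2, 3.4], [3.6, 3.5, 3.3]]
chi2 = friedman_chi2(data)  # 6.5 for this toy data
```

For n subjects and k conditions, the statistic is then compared against a χ² distribution with k − 1 degrees of freedom.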

Keywords: EEG, natural reading, reader's cognitive state, theta-rhythm, alpha-rhythm

Procedia PDF Downloads 66
1185 Lies and Pretended Fairness of Police Officers in Sharing

Authors: Eitan Elaad

Abstract:

The current study examined lying and pretended fairness by police personnel in sharing situations. Forty Israeli police officers and 40 laypeople from the community, all males, self-assessed their lie-telling ability, rated the frequency of their lies, evaluated the acceptability of lying, and indicated their use of rational and intuitive thinking while lying. Next, following the ultimatum game procedure, participants were asked to share 100 points with an imagined target, either a male police officer or a male non-police officer. Participants allocated points to the target person bearing in mind that the other person could accept or reject their offer. Participants' goal was to retain as many points as possible, and to this end, they could tell the target person that fewer than 100 points were available for distribution. We defined concealment, or lying, as the difference between the available 100 points and the sum of points the participant declared available for sharing. Results indicated that police officers lied less to fellow police targets than to non-police targets, whereas laypeople lied less to non-police targets than to imagined police targets. Pretended fairness was defined as the ratio between the points offered to the imagined target person and the points the participant declared available for sharing. Higher pretended fairness indicates greater motivation to display fair sharing, even if the fair sharing is fictitious. Police officers presented higher pretended fairness to police targets than laypeople did, whereas laypeople displayed more fairness to non-police targets than police officers did. We discuss the results in relation to occupational solidarity and loyalty among police personnel. Specifically, police work involves uncertainty, danger and risk, coercive authority, and the use of force, which isolates the police from the community and dictates strong bonds of solidarity between police personnel. It is therefore no wonder that police officers shared more points with (lied less to) fellow police targets than with non-police targets. On the other hand, police legitimacy, the belief that the police act honestly in the best interest of the citizens, shapes citizens' attitudes toward the police. The relatively low number of points that laypeople made available for distribution to police targets indicates difficulties with the legitimacy of the Israeli police.

Keywords: lying, fairness, police solidarity, police legitimacy, sharing, ultimatum game

Procedia PDF Downloads 102
1184 Effect of Surfactant Level of Microemulsions and Nanoemulsions on Cell Viability

Authors: Sonal Gupta, Rakhi Bansal, Javed Ali, Reema Gabrani, Shweta Dang

Abstract:

Nanoemulsions (NEs) and microemulsions (MEs) have been attractive tools for encapsulation of both hydrophilic and lipophilic actives. Both systems are composed of an oil phase, surfactant, co-surfactant and aqueous phase. Depending upon the application and intended use, both oil-in-water and water-in-oil emulsions can be designed. NEs are fabricated using high-energy methods employing a lower percentage of surfactant than MEs, which are self-assembled drug delivery systems. Owing to the nanometric size of the droplets, these systems have been widely used to enhance the solubility and bioavailability of natural as well as synthetic molecules. The aim of the present study is to assess the effect of the percentage of surfactants on the viability of Vero cells (African Green Monkey kidney epithelial cells) via the MTT assay. Green tea catechin (Polyphenon 60) loaded ME prepared by low-energy vortexing and NE prepared by high-energy ultrasonication used the same excipients (labrasol as oil, cremophor EL as surfactant and glycerol as co-surfactant); however, the percentage of oil and surfactant needed to prepare the ME was higher than for the NE. These formulations, along with their excipients (oilME=13.3%, SmixME=26.67%; oilNE=10%, SmixNE=13.52%), were added to Vero cells for 24 hrs. The tetrazolium dye 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) is reduced by live cells, and this reaction is used as the end point to evaluate the cytotoxicity of a test formulation. Results of the MTT assay indicated that oil at the different percentages exhibited almost equal cell viability (oilME ≅ oilNE), while the surfactant mixtures differed significantly in cell viability (SmixME < SmixNE). Polyphenon 60 loaded ME and its PlaceboME showed higher toxicity than Polyphenon 60 loaded NE and its PlaceboNE, which can be attributed to the higher concentration of surfactants present in MEs. Another probable reason for the high cell viability with Polyphenon 60 loaded NE might be the effective release of Polyphenon 60 from the NE formulation, which helps sustain the Vero cells.

Keywords: cell viability, microemulsion, MTT, nanoemulsion, surfactants, ultrasonication

Procedia PDF Downloads 411
1183 Deep Brain Stimulation and Motor Cortex Stimulation for Post-Stroke Pain: A Systematic Review and Meta-Analysis

Authors: Siddarth Kannan

Abstract:

Objectives: Deep Brain Stimulation (DBS) and Motor Cortex Stimulation (MCS) are innovative interventions for treating neuropathic pain disorders such as post-stroke pain. While each treatment has a varying degree of success in managing pain, a comparative analysis has not yet been performed, and the success rates of these techniques using validated, objective pain scores have not been synthesised. The aim of this study was to compare the pain relief offered by MCS and DBS in patients with post-stroke pain and to assess whether either procedure offered better results. Methods: A systematic review and meta-analysis were conducted in accordance with PRISMA guidelines (PROSPERO ID CRD42021277542). Three databases were searched, and articles published from 2000 to June 2023 were included (last search date 25 June 2023). Meta-analysis was performed using random-effects models. We evaluated the performance of DBS or MCS by assessing studies that reported pain relief using the Visual Analogue Scale (VAS). Descriptive statistics were analysed using SPSS (Version 27; IBM, Armonk, NY, USA); R (RStudio, Version 4.0.1) was used to perform the meta-analysis. Results: Of the 478 articles identified, 27 were included in the analysis (232 patients: 117 DBS and 115 MCS). The pooled proportion of patients who improved after DBS was 0.68 (95% CI, 0.57-0.77, I2=36%). The pooled proportion of patients who improved after MCS was 0.72 (95% CI, 0.62-0.80, I2=59%). A further sensitivity analysis included only studies with a minimum of 5 patients, to assess any impact on the overall results. Nine studies each for DBS and MCS met these criteria; there was no significant difference in results. Conclusions: The use of surgical interventions such as DBS and MCS is an emerging field for the treatment of post-stroke pain, with limited studies exploring and comparing these two techniques. 
While our study shows that MCS might be a slightly better treatment option, further research would need to be done in order to determine the appropriate surgical intervention for post-stroke pain.
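Random-effects pooling of the kind behind figures like 0.68 (I²=36%) is commonly done with the DerSimonian-Laird estimator. A minimal Python sketch, with illustrative effect sizes and variances rather than the 27 included studies:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Returns (pooled effect, its standard error, I^2 heterogeneity in %).
    """
    w = [1.0 / v for v in variances]          # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / c) if c > 0 else 0.0   # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return pooled, se, i2

# hypothetical per-study proportions improved and their variances
effects = [0.60, 0.75, 0.68]
variances = [0.010, 0.008, 0.012]
pooled, se, i2 = dersimonian_laird(effects, variances)
```

In practice, proportions would first be transformed (e.g. logit or arcsine) so their sampling distribution is closer to normal; the pooling algebra is unchanged.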

Keywords: post-stroke pain, deep brain stimulation, motor cortex stimulation, pain relief

Procedia PDF Downloads 111
1182 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance

Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens

Abstract:

Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen. It is easily liquefied at ≈ 9–10 bar pressure at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier with no CO₂ emission at final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure consisting of pipelines, tank trucks and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation solutions are at hand, the missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermocatalytic cracking, which is the most energy-lean and robust approach compared to other technologies such as plasma- and electrolysis-based decomposition. Because the thermocatalytic ammonia cracking process faces thermodynamic limitations, the decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (1 bar). At 350°C, the thermodynamic equilibrium at 1 bar pressure limits the conversion to 99%. Gaining additional conversion, up to e.g. 99.9%, necessitates heating to ca. 530°C. Actually reaching thermodynamic equilibrium is infeasible, however, as a sufficient driving force is needed, requiring even higher temperatures. Limiting the conversion to below the equilibrium composition is the more economical option. Thermocatalytic ammonia cracking is well documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is the most active for ammonia decomposition, with an onset of cracking activity around 350°C. Establishing > 99% conversion requires temperatures close to 600°C. 
Such high temperatures are likely to reduce the round-trip efficiency and also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design, avoiding diffusion limitations. Experiments in our packed-bed tubular reactor set-up showed that extragranular diffusion limitations occur at low NH₃ concentrations when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking that avoids the use of noble metals. To this aim, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties. The catalytic activity was promoted by adding alkaline and alkaline-earth metals. A third focus was studying the optimum process configuration through process simulations. A trade-off between conversion and favourable operational conditions (i.e. low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum cracking pressure in terms of energy cost.
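The equilibrium behaviour described above can be reproduced approximately from first principles. The Python sketch below treats NH₃ → ½N₂ + 3/2H₂ with rough, assumed standard reaction values (ΔH° ≈ 45.9 kJ/mol, ΔS° ≈ 99 J/mol·K, taken as temperature-independent, ideal-gas behaviour), so the numbers are indicative only:

```python
import math

R = 8.314      # J/(mol·K)
DH = 45.9e3    # J per mol NH3 cracked (assumed, approximate)
DS = 99.0      # J/(mol·K) per mol NH3 cracked (assumed, approximate)

def Kp(T):
    """Equilibrium constant from dG = dH - T*dS (reference pressure 1 bar)."""
    return math.exp(-(DH - T * DS) / (R * T))

def q(x, P):
    """Reaction quotient at NH3 conversion x and total pressure P (bar).

    Per mol NH3 fed: NH3 = 1-x, N2 = x/2, H2 = 3x/2, total = 1+x.
    """
    y_nh3 = (1 - x) / (1 + x)
    y_n2 = (x / 2) / (1 + x)
    y_h2 = (3 * x / 2) / (1 + x)
    return (y_n2 ** 0.5) * (y_h2 ** 1.5) / y_nh3 * P  # P^(1/2 + 3/2 - 1)

def equilibrium_conversion(T, P):
    """Solve q(x, P) = Kp(T) by bisection; q is monotonic in x on (0, 1)."""
    lo, hi = 0.0, 1.0 - 1e-12
    k = Kp(T)
    for _ in range(200):
        mid = (lo + hi) / 2
        if q(mid, P) < k:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With these assumed values the sketch reproduces the qualitative trends in the abstract: equilibrium conversion rises with temperature and falls with pressure, which is exactly the trade-off driving the optimum cracking pressure discussed above.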

Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium

Procedia PDF Downloads 46
1181 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Authors: Toshitaka Higashino, Naoki Wakamiya

Abstract:

Human beings perform a task by perceiving information from the outside world, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyzing them from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, the MHP was built on psychological experiments, so its relation to brain activity has remained unclear. To verify the validity of the MHP and propose our own model from a neuroscience viewpoint, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, experiments were first conducted using Latin alphabet characters as visual stimuli. In addition to response time, ERPs (event-related potentials) such as the N100 and P300 were measured by EEG. By comparing the cycle times predicted by the MHP with the latencies of the ERPs, it was found that the N100, related to perception of stimuli, appeared at the end of the perceptual processor. Furthermore, an additional experiment revealed that the P300, related to decision making, appeared during the response decision process, not at its end. Second, these findings were confirmed in experiments using Japanese Hiragana characters, i.e. Japan's own phonetic symbols. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings; despite this difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involved similar information processing in the brain. Based on these results, our model was proposed, which reflects response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of the N100; the cognitive processor, from the N100 to the P300; and the decision-action processor, from the P300 to the response.
Using our model, an application system which reflects brain activity can be established.
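The three-processor decomposition described above lends itself to a simple computation: given a trial's N100 latency, P300 latency, and total response time, the duration of each stage follows directly. The sketch below is our own illustration of that bookkeeping; the latency values in it are hypothetical, not measurements from the study.

```python
def stage_durations(n100_ms, p300_ms, rt_ms):
    """Split a trial's response time into the three processor stages of the
    proposed model: perception (stimulus -> N100), cognition (N100 -> P300),
    and decision-action (P300 -> response). All inputs in milliseconds."""
    assert 0 < n100_ms < p300_ms < rt_ms, "latencies must be ordered in time"
    return {
        "perception": n100_ms,
        "cognition": p300_ms - n100_ms,
        "decision-action": rt_ms - p300_ms,
    }

# Hypothetical single-trial latencies (ms), for illustration only
print(stage_durations(100, 320, 550))
```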

Keywords: brain activity, EEG, information processing model, model human processor

Procedia PDF Downloads 88
1180 Monitoring the Thin Film Formation of Carrageenan and PNIPAm Microgels

Authors: Selim Kara, Ertan Arda, Fahrettin Dolastir, Önder Pekcan

Abstract:

Biomaterials and thin-film coatings play a fundamental role in the medical, food and pharmaceutical industries. Carrageenan is a linear sulfated polysaccharide extracted from algae and seaweeds. To date, such biomaterials have been used in many smart drug delivery systems due to their biocompatibility and antimicrobial activity. Poly(N-isopropylacrylamide) (PNIPAm) gels and copolymers have also been used in medical applications. PNIPAm exhibits a lower critical solution temperature (LCST) at about 32-34 °C, which is very close to human body temperature. Below and above the LCST, PNIPAm gels exhibit distinct phase transitions between swollen and collapsed states. A special class of gels are microgels, which can react to environmental changes significantly faster than macroscopic gels because of their small size. The quartz crystal microbalance (QCM) is one of the most attractive techniques for monitoring the thin-film formation process. A sensitive QCM system was designed to detect 0.1 Hz differences in resonance frequency and 10⁻⁷ changes in energy dissipation, which are measures of the deposited mass and the film rigidity, respectively. PNIPAm microgels with diameters of a few hundred nanometers were produced in water via precipitation polymerization. 5 MHz quartz crystals with functionalized gold surfaces were used for the deposition of the carrageenan molecules and microgels from solutions that were slowly pumped through a flow cell. Interactions between charged carrageenan and microgel particles were monitored during the formation of the film layers, and the Sauerbrey masses of the deposited films were calculated. The critical phase transition temperatures around the LCST were detected during heating and cooling cycles.
It was shown that it is possible to monitor the interactions between PNIPAm microgels and biopolymer molecules, and it is also possible to specify the critical phase transition temperatures by using a QCM system.
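For a thin, rigid film, the Sauerbrey mass mentioned above follows directly from the measured frequency shift. The sketch below is our own illustration, using standard literature constants for AT-cut quartz rather than values from the abstract; for a 5 MHz crystal, the familiar mass sensitivity of roughly 17.7 ng cm⁻² Hz⁻¹ drops out of the constants.

```python
import math

def sauerbrey_mass(delta_f_hz, f0_hz=5e6):
    """Areal mass (ng/cm^2) deposited on a QCM, from the frequency shift
    delta_f (Hz) via the Sauerbrey relation delta_m = -C * delta_f.
    Valid for thin, rigid films (low energy dissipation)."""
    rho_q = 2648.0    # density of quartz, kg/m^3 (literature value)
    mu_q = 2.947e10   # shear modulus of AT-cut quartz, Pa (literature value)
    # Mass sensitivity C = sqrt(rho*mu) / (2*f0^2) in kg/(m^2*Hz),
    # converted to ng/(cm^2*Hz): *1e12 ng/kg, /1e4 cm^2/m^2
    C = math.sqrt(rho_q * mu_q) / (2 * f0_hz ** 2) * 1e12 / 1e4
    return -C * delta_f_hz

# A -10 Hz shift on a 5 MHz crystal corresponds to ~177 ng/cm^2 of film
print(f"{sauerbrey_mass(-10):.1f} ng/cm^2")
```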

Keywords: carrageenan, phase transitions, PNIPAm microgels, quartz crystal microbalance (QCM)

Procedia PDF Downloads 213
1179 Journal Bearing with Controllable Radial Clearance, Design and Analysis

Authors: Majid Rashidi, Shahrbanoo Farkhondeh Biabnavi

Abstract:

The hydrodynamic instability phenomenon in a journal bearing may be triggered by a reduction in the load carried by the bearing, by an increase in the journal speed, by a change in the lubricant viscosity, or by a combination of these factors. Previous research and development work to overcome the instability of journal bearings operating in the hydrodynamic lubrication regime can be categorized as follows: a) actively controlling the bearing sleeve with a piezo actuator; b) including strategically located and shaped internal grooves on the inner surface of the bearing sleeve; c) actively controlling the bearing sleeve with an electromagnetic actuator; d) actively and externally pressurizing the lubricant within the journal bearing set; and e) incorporating tilting pads on the inner surface of the bearing sleeve that assume different equilibrium angular positions in response to changes in bearing design parameters such as speed and load. This work presents an innovative design concept for a 'smart journal bearing' set to operate in a stable hydrodynamic lubrication regime despite variations in bearing speed, load, and lubricant viscosity. The proposed design allows the radial clearance to be adjusted in an attempt to maintain stable operation under conditions that would cause instability in a bearing with a fixed radial clearance. The design concept allows adjusting the radial clearance in small increments on the order of 0.00254 mm. This is achieved by axially moving two symmetric conical rigid cavities that are in close contact with the conically shaped outer shell of a sleeve bearing. The proposed work includes a 3D model of the bearing that depicts the structural interactions of the bearing components. The 3D model is employed to conduct finite element analyses simulating the mechanical behavior of the bearing from a structural point of view.
The concept of controlling the radial clearance, as presented in this work, is original and has not been proposed or discussed in previous research. A typical journal bearing was analyzed under the following set of design parameters: r = 1.27 cm (journal radius), c = 0.0254 mm (radial clearance), L = 1.27 cm (bearing length), W = 445 N (bearing load), and μ = 0.028 Pa·s (lubricant viscosity). A shaft speed of 3600 rpm was considered, and the mass supported by the bearing, m, was set to 4.38 kg. The Sommerfeld number associated with these design parameters turns out to be S = 0.3. This combination resulted in stable bearing operation. Subsequently, the speed was postulated to increase from 3600 rpm to 7200 rpm; the bearing was found to be unstable at the new, increased speed. In order to regain stability, the radial clearance was increased from c = 0.0254 mm to 0.0358 mm. This change in radial clearance was shown to bring the bearing back to a stable operating condition.
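The quoted S = 0.3 can be checked against the listed parameters using the standard definition of the Sommerfeld number, S = (r/c)² μN/P, with N in rev/s and P the load over the projected bearing area L·D. Sign and unit conventions vary between texts; the sketch below uses the convention that reproduces the abstract's value and is our own illustration:

```python
def sommerfeld_number(r, c, mu, N_rpm, W, L):
    """Sommerfeld number S = (r/c)^2 * mu * N / P, all inputs in SI units:
    r, c, L in m; mu in Pa*s; W in N. N is converted from rpm to rev/s,
    and P = W / (L * D) is the load on the projected bearing area."""
    N = N_rpm / 60.0        # shaft speed, rev/s
    P = W / (L * 2 * r)     # unit load, Pa
    return (r / c) ** 2 * mu * N / P

# Parameters from the abstract: r = 1.27 cm, c = 0.0254 mm, L = 1.27 cm,
# W = 445 N, mu = 0.028 Pa*s, 3600 rpm
S = sommerfeld_number(r=0.0127, c=0.0254e-3, mu=0.028, N_rpm=3600, W=445, L=0.0127)
print(f"S = {S:.2f}")
```

Running this gives S ≈ 0.30, matching the abstract, and doubling the speed to 7200 rpm doubles S, which is the change the clearance adjustment is meant to compensate.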

Keywords: adjustable clearance, bearing, hydrodynamic, instability, journal

Procedia PDF Downloads 265
1178 Enhancement to Green Building Rating Systems for Industrial Facilities by Including the Assessment of Impact on the Landscape

Authors: Lia Marchi, Ernesto Antonini

Abstract:

The impact of industrial sites on people's living environment involves both detrimental effects on the ecosystem and perceptual-aesthetic interference with the scenery. These, in turn, affect the economic and social value of the landscape, as well as the wellbeing of workers and local communities. Given how widespread the phenomenon is and how significant its effects are, the need emerges for a joint approach to assess, and thus mitigate, the impact of factories on the landscape, the latter being understood as the result of the action and interaction of natural and human factors. However, the impact assessment tools available for this purpose are quite heterogeneous and mostly monodisciplinary. On the one hand, green building rating systems (GBRSs) are increasingly used to evaluate the performance of manufacturing sites, mainly through quantitative indicators focused on environmental issues. On the other hand, methods to detect the visual and social impact of factories on the landscape are gradually emerging in the literature, but they generally adopt only qualitative gauges. The research addresses the integration of environmental impact assessment with the perceptual-aesthetic interference of factories on the landscape. The GBRS model is assumed as a reference, since it can simultaneously investigate the different topics that affect sustainability and return a global score. A critical analysis of GBRSs relevant to industrial facilities led to the selection of the U.S. GBC LEED protocol as the most suitable for this scope. A revision of LEED v4 Building Design+Construction was then produced, adding specific indicators to measure the interference of manufacturing sites with the perceptual-aesthetic and social aspects of the territory.
To this end, a new impact category was defined, namely 'PA - Perceptual-aesthetic aspects', comprising eight new credits specifically designed to assess how well buildings harmonize with their surroundings: they investigate, for example, the morphological and chromatic harmonization of the facility with the scenery, or the site's receptiveness and attractiveness. The credits weighting table was revised accordingly, following the LEED points allocation system. Like all LEED credits, each new PA credit is thoroughly described in a sheet setting out its aim, its requirements, and the available options for gauging the interference and earning a score. Lastly, each credit is related to mitigation tactics drawn from a catalogue of exemplary case studies, also developed by the research. The result is a modified LEED scheme which includes compatibility with the landscape within the sustainability assessment of industrial sites. The whole system consists of 10 evaluation categories containing 62 credits in total. Finally, the tool was tested on an Italian factory, allowing the comparison of three mitigation scenarios with increasing levels of compatibility. The study proposes a holistic and viable approach to the environmental impact assessment of factories through a tool that integrates the multiple aspects involved within a worldwide recognized rating protocol.

Keywords: environmental impact, GBRS, landscape, LEED, sustainable factory

Procedia PDF Downloads 100
1177 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System

Authors: Min Hae Song, Jooyong Park

Abstract:

Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient to grade automatically; however, their structure allows students to find the correct answer among the options even when they do not know it. A computerized modified multiple-choice testing system (CMMT) was developed that exploits the interactivity of computers: it presents the question first, and the options only later, for a short time, when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with directions stating that they would be penalized for wrong answers, but not for no response. There were four experimental conditions: a high or a low penalty rate, each in the traditional multiple-choice or the CMMT format. In the low penalty condition, the penalty rate equaled the probability of getting the correct answer by random guessing; in the high penalty condition, students were penalized at twice that rate. The results showed that the number of no responses was significantly higher, and the number of random guesses significantly lower, in the CMMT format. There were no significant differences between the two penalty conditions, possibly because the actual score difference between the two conditions was too small.
In the discussion, the possibility of applying CMMT format tests while penalizing wrong answers in actual testing settings was addressed.
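The two penalty levels translate into simple expected values for a student who guesses at random. The sketch below is our own illustration and assumes a hypothetical four-option item, since the abstract does not state the number of options: with the low penalty equal to the guessing probability 1/k, guessing still has a small positive expectation, while at twice that rate it becomes negative. (Classic formula scoring instead uses a penalty of 1/(k-1), which makes the expectation exactly zero.)

```python
def expected_guess_score(k, penalty):
    """Expected points from randomly guessing on one k-option item that
    awards 1 point for a correct answer and subtracts `penalty` for a
    wrong one (no response scores 0)."""
    return (1 / k) * 1 + ((k - 1) / k) * (-penalty)

k = 4           # hypothetical number of options per item
low = 1 / k     # penalty equal to the chance of guessing right (per the abstract)
high = 2 / k    # twice that rate
print(expected_guess_score(k, low))    # small positive: guessing still pays slightly
print(expected_guess_score(k, high))   # negative: guessing now costs points
```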

Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format

Procedia PDF Downloads 154