Search results for: pulmonary function tests
1756 Risk Factors and Regional Difference in the Prevalence of Fecal Carriage Third-Generation Cephalosporin-Resistant E. Coli in Taiwan
Authors: Wan-Ling Jiang, Hsin Chi, Jia-Lu Cheng, Ming-Fang Cheng
Abstract:
Background: Investigating the risk factors for the fecal carriage of third-generation cephalosporin-resistant E. coli could contribute to further disease prevention. Previous research on the prevalence of third-generation cephalosporin-resistant E. coli in children in different regions of Taiwan is limited. This project aims to explore the risk factors and regional differences in the prevalence of third-generation cephalosporin-resistant and other antibiotic-resistant E. coli in the northern, southern, and eastern regions of Taiwan. Methods: We collected data from children aged 0 to 18 years from community or outpatient clinics from July 2022 to May 2023 in southern, northern, and eastern Taiwan. The questionnaire was designed to survey the characteristics of the participants and possible risk factors, such as clinical information, household environment, drinking water, and food habits. After fecal samples were collected and E. coli was isolated from stool cultures, antibiotic sensitivity tests and MLST typing were performed. Questionnaires were used to analyze the risk factors for third-generation cephalosporin-resistant E. coli in the three regions of Taiwan. Results: Among the 246 stool samples, third-generation cephalosporin-resistant E. coli accounted for 37.4% (97/246) of all isolates. Among the three regions of Taiwan, the highest prevalence of fecal carriage of third-generation cephalosporin-resistant E. coli was observed in southern Taiwan (42.7%), followed by northern Taiwan (35.5%) and eastern Taiwan (28.4%). Multi-drug-resistant E. coli had prevalence rates of 51.9%, 66.3%, and 37.1% in the northern, southern, and eastern regions, respectively. MLST typing revealed that ST131 was the most prevalent type (11.8%). The prevalence of ST131 in northern, southern, and eastern Taiwan was 10.1%, 12.3%, and 13.2%, respectively. 
Risk factor analysis identified lower paternal education, overweight status, and a non-vegetarian diet as statistically significant risk factors for third-generation cephalosporin-resistant E. coli. Conclusion: The fecal carriage rates of antibiotic-resistant E. coli among Taiwanese children are on the rise. This study found regional disparities in the prevalence of third-generation cephalosporin-resistant and multi-drug-resistant E. coli, with southern Taiwan having the highest prevalence. Lower paternal education, overweight status, and a non-vegetarian diet were potential risk factors for third-generation cephalosporin-resistant E. coli in this study.
Keywords: Escherichia coli, fecal carriage, antimicrobial resistance, risk factors, prevalence
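Risk-factor findings like these are typically reported as odds ratios computed from 2×2 carriage-by-exposure tables. The sketch below shows that calculation in plain Python; the counts used are hypothetical, since the abstract does not report the underlying tables.

```python
# Illustrative odds-ratio calculation for a candidate risk factor.
# The 2x2 counts below are hypothetical, not taken from the study.
import math

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio with a 95% CI via the log-OR normal approximation."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se = math.sqrt(1/exposed_cases + 1/exposed_controls
                   + 1/unexposed_cases + 1/unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# e.g. "lower paternal education": 60 of 120 carriers exposed
# vs. 37 of 126 non-carriers exposed (hypothetical counts)
or_, ci = odds_ratio(60, 60, 37, 89)
print(round(or_, 2), tuple(round(x, 2) for x in ci))  # 2.41 (1.42, 4.06)
```

A confidence interval excluding 1.0, as here, is what "statistically significant risk factor" corresponds to in this framing.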
Procedia PDF Downloads 67
1755 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers
Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker
Abstract:
The Advanced Heavy Water Reactor (AHWR) is a vertical pressure-tube-type, heavy-water-moderated and boiling-light-water-cooled natural-circulation-based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18 and 24 fuel rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. Single-phase flow conditions exist in the reactor rod bundle during startup and up to a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during startup has necessitated the determination of the turbulent mixing rate of coolant amongst these subchannels. Thus, it is vital to evaluate turbulent mixing between the subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, computational fluid dynamics (CFD) methodology can be useful for investigating the thermal-hydraulic phenomena in a nuclear fuel assembly. The present report covers the results of simulations of pressure drop, velocity variation and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, 54-rod assemblies are simulated with ANSYS Fluent 15 using steady simulations with ANSYS Workbench meshing. The simulations have been carried out with water at a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm² in the bare region, with a hydraulic diameter of 8.105 mm. In the present investigation, a benchmark k-ε model has been used as the turbulence model, and the symmetry condition is set as the boundary condition. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The rod size and pitch in the test are the same as those of the actual rod bundle in the prototype. 
Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannel has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal-hydraulic
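The quoted Reynolds number, hydraulic diameter and velocity range can be cross-checked with the standard relation Re = ρvD_h/μ. A minimal sketch, assuming water properties at roughly room temperature (density 997 kg/m³, dynamic viscosity 8.9e-4 Pa·s, which are not stated in the abstract):

```python
# Reynolds-number check for the bare rod-bundle region, using the hydraulic
# diameter quoted in the abstract. Water properties at ~25 degC are assumed.
d_h = 8.105e-3             # m, hydraulic diameter (D_h = 4*A_flow / P_wetted)
rho, mu = 997.0, 8.9e-4    # kg/m^3, Pa*s (assumed values)

def reynolds(velocity):
    return rho * velocity * d_h / mu

# mean velocity that reproduces Re = 9861.83 from the abstract
v = 9861.83 * mu / (rho * d_h)
print(round(v, 3), round(reynolds(v), 2))
```

The recovered velocity (about 1.09 m/s under these assumed properties) falls inside the 0 to 1.2 m/s range reported for the tests.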
Procedia PDF Downloads 320
1754 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls
Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin
Abstract:
The objective of this paper is to investigate the effects of thickness and axial loading during a fire test on the load-bearing capacity of fire-damaged normal-strength concrete walls. These two factors affect the temperature distributions in concrete members, which are mainly obtained through numerous experiments. Toward this goal, three wall specimens of different thicknesses are heated for 2 h according to the ISO standard heating curve, and the temperature distributions through the thicknesses are measured using thermocouples. In addition, two wall specimens are heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on the wall thickness and on the axial load during the fire test. After the fire tests, the specimens are cured for one month, followed by load testing. The heated specimens are compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference becomes evident with respect to the wall thickness. To validate the experimental results, finite element models are generated, for which the material properties obtained from the experiments are subjected to elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, is then applied to model fire-damaged walls 2,800 mm in height, the typical story height of residential buildings in Korea, considering the buckling effect. The models for the structural analyses are generated using the deformed shape after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint of the pinned ends. 
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load during the fire is within approximately 5%.
Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis
Procedia PDF Downloads 215
1753 A Study of Anthropometric Correlation between Upper and Lower Limb Dimensions in Sudanese Population
Authors: Altayeb Abdalla Ahmed
Abstract:
Skeletal phenotype is a product of a balanced interaction between genetics and environmental factors throughout different life stages. Therefore, interlimb proportions are variable between populations. Although interlimb proportion indices have been used in anthropology to assess the influence of various environmental factors on the limbs, an extensive literature review revealed a paucity of published research assessing correlations between limb parts and the possibility of reconstruction. Hence, this study aims to assess the relationships between upper and lower limb parts and to develop regression formulae to reconstruct the parts from one another. The left upper arm length, ulnar length, wrist breadth, hand length, hand breadth, tibial length, bimalleolar breadth, foot length, and foot breadth of 376 right-handed subjects, comprising 187 males and 189 females (aged 25-35 years), were measured. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then sex-specific simple and multiple linear regression models were used to estimate upper limb parts from lower limb parts and vice versa. The results of this study indicated significant sexual dimorphism for all variables and a significant correlation between the upper and lower limb parts (p < 0.01). Linear and multiple (stepwise) regression equations were developed to reconstruct the limb parts in the presence of a single dimension or multiple dimensions from the other limb. The multiple stepwise regression equations generated better reconstructions than the simple equations. These results are significant in forensics, as they can aid in the identification of multiple isolated limb parts, particularly during mass disasters and criminal dismemberments. Although DNA analysis is the most reliable tool for identification, its usage has multiple limitations in developing countries, e.g., cost, facility availability, and trained personnel. 
Furthermore, it has important implications for plastic and orthopedic reconstructive surgery. This is the only reported study assessing the correlation and prediction capabilities between many of these upper and lower limb dimensions. The present study demonstrates a significant correlation between the interlimb parts in both sexes, which indicates the possibility of reconstruction using regression equations.
Keywords: anthropometry, correlation, limb, Sudanese
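The simple regression models described above reduce to ordinary least squares on one predictor. A minimal sketch of how such a reconstruction equation is fitted and used, with hypothetical measurements standing in for the study's Sudanese sample:

```python
# Least-squares fit of the form y = a + b*x, of the kind used to reconstruct
# one limb dimension from another. All data below are hypothetical.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b  # hand_length ~ a + b * foot_length

foot = [24.1, 25.3, 26.0, 24.8, 27.2]   # cm, hypothetical foot lengths
hand = [17.8, 18.6, 19.1, 18.2, 19.9]   # cm, hypothetical hand lengths
a, b = fit_line(foot, hand)
pred = a + b * 25.0                      # reconstructed hand length, cm
print(round(b, 3), round(pred, 2))
```

The stepwise multiple regressions in the study extend this idea by adding further lower-limb predictors while they improve the fit.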
Procedia PDF Downloads 295
1752 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, with simplification being one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and POS-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. 
More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactic simplification
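Word-form entropy as described here is the Shannon entropy H = -Σ p(w) log₂ p(w) over the word-frequency distribution of a corpus. A minimal sketch with two toy token sequences (not the study's corpora) showing why it captures both diversity and evenness:

```python
# Word-form Shannon entropy: H = -sum(p * log2(p)) over word frequencies.
# Lower entropy = fewer distinct forms and/or a more skewed distribution,
# which is the operationalization of "simplification" described above.
import math
from collections import Counter

def word_entropy(tokens):
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

varied = "the cat sat on a warm mat near the door".split()
repetitive = "the the the cat cat the the the cat cat".split()
print(round(word_entropy(varied), 3), round(word_entropy(repetitive), 3))
```

POS-form entropy is computed identically, just over part-of-speech tags instead of word forms.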
Procedia PDF Downloads 94
1751 Parametrical Analysis of Stain Removal Performance of a Washing Machine: A Case Study of Sebum
Authors: Ozcan B., Koca B., Tuzcuoglu E., Cavusoglu S., Efe A., Bayraktar S.
Abstract:
A washing machine is mainly used for removing various types of dirt and stains and for eliminating malodorous substances from textile surfaces. Stains originate from various sources, from the human body to environmental contamination, and there are accordingly various methods for removing them. They are roughly classified into four groups: oily (greasy) stains, particulate stains, enzymatic stains and bleachable (oxidizable) stains. Oily stains on clothing surfaces commonly result from contact with organic substances of the human body (e.g., perspiration, skin shedding and sebum) or from exposure to an oily environmental pollutant (e.g., oily foods). Studies have shown that human sebum is a major component of the oily soil found on garments, and if it ages under various environmental conditions, it can generate stubborn yellow stains on the textile surface. In this study, a parametric study was carried out to investigate the key factors affecting the cleaning performance (specifically, the sebum removal performance) of a washing machine. These parameters are the mechanical agitation percentage of the tumble, the water consumed and the total washing period. A full factorial design of experiments is used to capture all the possible parametric interactions, using the Minitab 2021 statistical program. Tests are carried out with a commercial liquid detergent and two types of sebum-soiled fabric: cotton and cotton + polyester. The parametric results revealed that, for both test samples, increasing the washing time and the mechanical agitation could lead to much better sebum removal. However, for each sample, the water amount had different outcomes: increasing the water amount decreases the performance for cotton + polyester fabrics, while it is favorable for cotton fabric. Besides this, it was also discovered that the type of textile can greatly affect the sebum removal performance. 
The results showed that cotton + polyester fabrics are much easier to clean than cotton fabrics.
Keywords: laundry, washing machine, low-temperature washing, cold wash, washing efficiency index, sustainability, cleaning performance, stain removal, oily soil, sebum, yellowing
Procedia PDF Downloads 143
1750 Cooperative Robot Application in a Never Explored or an Abandoned Sub-Surface Mine
Authors: Michael K. O. Ayomoh, Oyindamola A. Omotuyi
Abstract:
Autonomous mobile robots deployed to explore or operate in a never-explored or abandoned sub-surface mine require extreme effectiveness in coordination and communication. In a bid to transmit information from the depths of the mine to the external surface in real time, amidst diverse physical, chemical and virtual impediments, the concept of unified cooperative robots is seen as a proficient approach. This paper presents an effective [human → robot → task] coordination framework for the effective exploration of an abandoned underground mine. The problem addressed in this research is the development of a globalized optimization model, premised on time-series differentiation and geometrical configurations, for the effective positioning of the two classes of robots in the cooperation, namely the outermost stationary master (OSM) robots and the innermost dynamic task (IDT) robots, for effective bi-directional signal transmission. In addition, the synchronization of a vision system and a wireless communication system for both categories of robots, a fiber-optic system for the OSM robots in cases of steeply sloped or vertical mine channels, and an autonomous battery-recharging capability for the IDT robots further enhance the proposed concept. The OSM robots are the master robots, positioned at strategic locations from the open surface of the mine down to its base using a fiber-optic cable or a wireless communication medium, all subject to the identified geometrical configuration of the mine. The OSM robots are usually stationary and function by coordinating the transmission of signals from the IDT robots at the base of the mine to the surface, and in the reverse order based on human decisions at the surface control station. 
The proposed scheme also presents an optimized number of robots required to form the cooperation, in a bid to reduce the overall operational cost and system complexity.
Keywords: sub-surface mine, wireless communication, outermost stationary master robots, innermost dynamic task robots, fiber optics
Procedia PDF Downloads 213
1749 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming
Authors: Rui Li, Min Wen, Kim Bang Salling
Abstract:
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The approach of the research is to use a simple dynamic model of the condition-based tamping process together with a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account in scheduling tamping: (1) track degradation, measured as the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on train speed limits; (4) the dependency of track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating open track from station sections. A Danish railway track between Odense and Fredericia, 42.6 km in length, is used as a case study over a time period of three and four years in the proposed maintenance model. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with a previous model based on optimizing the number of tamping operations. Different maintenance strategies are discussed in the paper. 
The analysis of the results obtained from the model also shows that a longer period of predictive tamping planning, namely yearly condition-based planning, yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance.
Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance
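The core structure of such a model, binary tamp/no-tamp decisions per period, linear degradation of the longitudinal-level standard deviation, partial recovery after tamping, and a quality threshold, can be illustrated with a deliberately tiny toy instance. This is not the paper's full mixed 0-1 program: the degradation rate, recovery factor and threshold are assumed values, and exhaustive search stands in for a MILP solver.

```python
# Toy 0-1 tamping schedule for one track section over T periods: the track's
# SD grows linearly each period, tamping before a period removes 70% of the
# excess SD above the initial value, and SD must never exceed the threshold.
# We search all binary plans for the cheapest (fewest tampings) feasible one.
from itertools import product

T, sd0, rate, limit = 6, 1.0, 0.4, 2.0   # periods, initial SD, mm/period, mm
recovery = 0.7                            # fraction of excess SD removed

def simulate(plan):
    sd, feasible = sd0, True
    for tamp in plan:
        if tamp:
            sd = sd0 + (1 - recovery) * (sd - sd0)   # partial recovery
        sd += rate                                    # degradation
        feasible = feasible and sd <= limit           # quality threshold
    return feasible

best = min((p for p in product((0, 1), repeat=T) if simulate(p)), key=sum)
print(sum(best), best)
```

In the real model the same decisions are encoded as 0-1 variables with linear constraints, which is what lets standard integer-programming solvers handle a 42.6 km corridor over multiple years.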
Procedia PDF Downloads 443
1748 Effect of Different Thermomechanical Cycles on Microstructure of AISI 4140 Steel
Authors: L.L. Costa, A. M. G. Brito, S. Khan, L. Schaeffer
Abstract:
The microstructure resulting from the forging process is studied as a function of variables such as temperature, deformation, austenite grain size and cooling rate. The purpose of this work is to study the thermomechanical behavior of DIN 42CrMo4 (AISI 4140) steel maintained at temperatures of 900 °C, 1000 °C, 1100 °C and 1200 °C for austenitization times of 22, 66 and 200 minutes each, and subsequently forged. Some samples were quenched in water in order to study the austenite grain; to investigate the microstructure, the remaining samples, instead of being quenched after forging, were cooled naturally in air. The morphologies and properties, such as hardness, of the materials prepared by these two different routes have been compared. In addition to the forging experiments, a numerical simulation using the finite element method (FEM), microhardness profiles and metallographic images are presented. Forging force vs. position curves are compared with the metallographic results for each annealing condition. The microstructural phenomena resulting from hot forming showed that a longer austenitization time and a higher temperature decrease the forging force in the curves. The complete recrystallization phenomena (static, dynamic and metadynamic) were observed at the highest temperature and longest time, i.e., in the samples austenitized for 200 minutes at 1200 °C. However, the highest hardness of the quenched samples was obtained when the temperature was 900 °C for 66 minutes. The phases observed in the naturally cooled samples were exclusively ferrite and pearlite, although the continuous cooling diagram indicates the presence of austenite and bainite. 
The morphology of the phases of the naturally cooled samples has shown that the phase arrangement and the prior austenite grain size are the reasons for the high hardness obtained in the samples when the temperatures were 900 °C and 1100 °C, with austenitization times of 22 and 66 minutes, respectively.
Keywords: austenitization time, thermomechanical effects, forging process, AISI 4140 steel
Procedia PDF Downloads 144
1747 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in identifying patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized, and the network was trained on the plantar pressure mapping images. Findings: From a total of 895 images, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was measured, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and by doctors to predict the classification of pes planus and to prescreen for possible musculoskeletal disorders related to this condition. 
However, more models need to be considered and compared to achieve higher accuracy.
Keywords: foot disorder, machine learning, neural network, pes planus
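The reported training (79.14%) and test (72.09%) accuracies summarize how often the model's labels agree with the physician's. For a screening application like this, sensitivity and specificity matter alongside raw accuracy; a minimal sketch of those metrics, using hypothetical predictions rather than the study's data:

```python
# Binary-classification metrics behind accuracy figures such as those
# reported for the pes planus CNN. The label lists below are hypothetical.
def confusion_metrics(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)   # fraction of true pes planus cases caught
    spec = tn / (tn + fp)   # fraction of healthy feet correctly cleared
    return acc, sens, spec

y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]   # physician labels (1 = pes planus)
y_pred = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # model predictions (hypothetical)
print(confusion_metrics(y_true, y_pred))
```

With 581 of 895 images labeled pes planus, the classes are imbalanced (about 65% positive), so a model predicting "pes planus" for everything would already score near 65% accuracy; sensitivity and specificity expose that failure mode where accuracy alone does not.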
Procedia PDF Downloads 360
1746 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL
Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson
Abstract:
The invention and development of polymerase chain reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of these devices while reducing the total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous-flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach accounting for the geometrical features of a CF μPCR device, performing a series of simulations at a relatively small number of Design of Experiments (DoE) points with COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate the possibility of improving the DNA amplification efficiency by ∼2% in one PCR cycle by doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. 
Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken for thermal cycling in such devices.
Keywords: PCR, optimisation, microfluidics, COMSOL
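The surrogate idea above, run a few expensive CFD simulations at DoE points, fit a cheap response surface, then optimise the surface instead of the simulator, can be sketched in one dimension. The sketch below fits a quadratic through three "DoE" samples and takes the parabola's vertex as the predicted optimum; the sample values are synthetic stand-ins, not outputs of the study's COMSOL model, and the real work uses polyharmonic splines and neural networks over several geometric parameters.

```python
# 1-D surrogate sketch: quadratic response surface through three DoE points
# (channel width vs. a synthetic pressure-drop proxy), optimised analytically.
def quad_through(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # Newton divided differences: y = y0 + d1*(x-x0) + d2*(x-x0)*(x-x1)
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    a, b = d2, d1 - d2 * (x0 + x1)   # expand to y = a*x^2 + b*x + c
    return a, b

# channel widths in um vs. synthetic objective samples (stand-in values)
a, b = quad_through((200, 5.0), (300, 3.2), (400, 3.8))
x_opt = -b / (2 * a)                  # vertex = surrogate's predicted optimum
print(round(x_opt, 1))
```

Each surrogate evaluation here costs nanoseconds, which is why genetic algorithms and coordinate search become affordable once the response surface replaces the CFD solver.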
Procedia PDF Downloads 161
1745 Comparative Effect of Self-Myofascial Release as a Warm-Up Exercise on Functional Fitness of Young Adults
Authors: Gopal Chandra Saha, Sumanta Daw
Abstract:
A warm-up is an essential component for optimizing performance in various sports before a physical fitness training session. This study investigated the immediate comparative effects of self-myofascial release through vibration rolling (VR), non-vibration rolling (NVR), and static stretching as part of a warm-up treatment on the functional fitness of young adults. Functional fitness is a classification of training that prepares the body for real-life movements and activities. For the present study, 20 male physical education students were selected as subjects. The age of the subjects ranged from 20 to 25 years. The functional fitness variables undertaken in the present study were flexibility, muscle strength, agility, and the static and dynamic balance of the lower extremity. Each of the three warm-up protocols was administered on consecutive days, i.e., with a 24 h time gap, and all tests were administered in the morning. The mean and SD were used as descriptive statistics. The significance of statistical differences among the groups was measured by applying an F-test, and to find the exact location of the differences, a post hoc test (least significant difference) was applied. It was found from the study that only flexibility showed a significant difference among the three types of warm-up exercise. The observed results depicted that VR had more impact on myofascial release in flexibility in comparison with NVR and stretching as part of a warm-up exercise, as the p-value was less than 0.05. In the present study, among the three means of warm-up exercise, the vibration roller showed a better mean difference than NVR and static stretching exercise on the functional fitness of young physical education practitioners, although the results were found insignificant in the case of muscle strength, agility, and the static and dynamic balance of the lower extremity. 
These findings suggest that sports professionals and coaches may take VR into account when designing more efficient and effective pre-performance routines for the long term to improve exercise performance. VR has high potential to translate into practical on-field applications.
Keywords: self-myofascial release, functional fitness, foam roller, physical education
Procedia PDF Downloads 133
1744 Quantification of Global Cerebrovascular Reactivity in the Principal Feeding Arteries of the Human Brain
Authors: Ravinder Kaur
Abstract:
Introduction: Global cerebrovascular reactivity (CVR) mapping is a promising clinical assessment for stress-testing the brain using physiological challenges, such as CO₂, to elicit changes in perfusion. It enables real-time assessment of cerebrovascular integrity and health. Conventional imaging approaches solely use steady-state parameters, like cerebral blood flow (CBF), to evaluate the integrity of the resting parenchyma and can erroneously show a healthy brain at rest, despite underlying pathogenesis in the presence of cerebrovascular disease. Conversely, coupling CO₂ inhalation with phase-contrast MRI neuroimaging interrogates the capacity of the vasculature to respond to changes under stress. It shows promise in providing prognostic value as a novel health marker to measure neurovascular function in disease and to detect early brain vasculature dysfunction. Objective: This exploratory study was established to: (a) quantify the CBF response to CO₂ in hypocapnia and hypercapnia, (b) evaluate disparities in CVR between the internal carotid artery (ICA) and the vertebral artery (VA), and (c) assess sex-specific variation in CVR. Methodology: Phase-contrast MRI was employed to measure the cerebrovascular reactivity to CO₂ (±10 mmHg). The respiratory interventions were presented using the prospective end-tidal targeting RespirAct™ Gen3 system. Post-processing and statistical analysis were conducted. Results: In 9 young, healthy subjects, the CBF increased from hypocapnia to hypercapnia in all vessels (4.21±0.76 to 7.20±1.83 mL/sec in the ICA, 1.36±0.55 to 2.33±1.31 mL/sec in the VA, p < 0.05). The CVR was quantitatively higher in the ICA than in the VA (slope of linear regression: 0.23 vs. 0.07 mL/sec/mmHg, p < 0.05). No statistically significant difference in CVR was observed between males and females (0.25 vs. 0.20 mL/sec/mmHg in the ICA, 0.09 vs. 0.11 mL/sec/mmHg in the VA, p > 0.05). Conclusions: The principal finding of this investigation validated the modulation of CBF by CO₂. 
Moreover, it indicated that regional heterogeneity in the hemodynamic response exists in the brain. This study provides scope to standardize the quantification of CVR prior to its clinical translation.
Keywords: cerebrovascular disease, neuroimaging, phase-contrast MRI, cerebrovascular reactivity, carbon dioxide
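CVR is a flow sensitivity, ΔCBF per mmHg of CO₂. As a back-of-envelope check on the abstract's group means: the hypocapnic-to-hypercapnic step spans about 20 mmHg (±10 mmHg around baseline), so a crude two-point estimate is ΔCBF/ΔPCO₂. Note this group-mean estimate differs from the study's 0.23 and 0.07 mL/sec/mmHg values, which come from linear regression across measurements; the sketch only illustrates the unit and direction of the effect.

```python
# Two-point CVR estimate from the abstract's mean flows; the +/-10 mmHg
# CO2 targets imply a ~20 mmHg span between the two states (assumption).
def two_point_cvr(cbf_hypo, cbf_hyper, delta_pco2=20.0):
    return (cbf_hyper - cbf_hypo) / delta_pco2   # mL/sec/mmHg

ica = two_point_cvr(4.21, 7.20)   # internal carotid artery
va = two_point_cvr(1.36, 2.33)    # vertebral artery
print(round(ica, 4), round(va, 4))
```

Even this crude estimate reproduces the paper's qualitative finding: the ICA's reactivity is roughly three times that of the VA.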
Procedia PDF Downloads 148
1743 Traumatic Chiasmal Syndrome Following Traumatic Brain Injury
Authors: Jiping Cai, Ningzhi Wangyang, Jun Shao
Abstract:
Traumatic brain injury (TBI) is one of the major causes of morbidity and mortality; it leads to structural and functional damage in several parts of the brain, such as the cranial nerves, the optic nerve tract or other circuitry involved in vision, and the occipital lobe, depending on its location and severity. As a result, the functions associated with vision processing and perception are significantly affected, causing blurred vision, double vision, decreased peripheral vision and blindness. Here, two cases complaining of monocular vision loss (actually temporal hemianopia) due to traumatic chiasmal syndrome after frontal head injury are reported, and their findings are compared with individual case reports published in the literature. Reported cases of traumatic chiasmal syndrome appear to share some common features, such as injury to the frontal bone and fracture of the anterior skull base. The degree of bitemporal hemianopia and visual acuity loss has a variable presentation and is not necessarily related to the severity of the craniocerebral trauma. Chiasmal injury may occur even in the absence of bony chip impingement. Isolated bitemporal hemianopia is rare, and clinical improvement usually may not occur. Mechanisms of damage to the optic chiasm after trauma include direct tearing, contusion haemorrhage and contusion necrosis, as well as secondary mechanisms such as cell death, inflammation, edema, impaired neurogenesis and axonal damage associated with TBI. Besides visual field testing, MRI evaluation of the optic pathways seems to be strong objective evidence for demonstrating the impairment of the integrity of the visual system following TBI. 
Therefore, traumatic chiasmal syndrome should be considered as a differential diagnosis by both neurosurgeons and ophthalmologists in patients presenting with visual impairment, especially bitemporal hemianopia, after head injury causing frontal and anterior skull base fractures.
Keywords: bitemporal hemianopia, brain injury, optic chiasma, traumatic chiasmal syndrome
Procedia PDF Downloads 79
1742 Cognitive Performance and Physiological Stress during an Expedition in Antarctica
Authors: Andrée-Anne Parent, Alain-Steve Comtois
Abstract:
The Antarctic environment can be a great challenge for human exploration. Explorers need to be focused on the task and require the physical abilities to succeed and survive in complete autonomy in this hostile environment. The aim of this study was to observe cognitive performance and physiological stress with a biomarker (cortisol) and hand grip strength during an expedition in Antarctica. A total of 6 explorers were in complete autonomous exploration on the Forbidden Plateau in Antarctica to reach unknown summits during a 30-day period. The Stroop test, a simple reaction time test, and a mood scale (PANAS) were administered every week during the expedition. Saliva samples were taken before sailing to Antarctica, on the first day on the continent, after the mission on the continent, and on the boat return trip. Furthermore, hair samples were taken before and after the expedition. The results were analyzed with SPSS using repeated-measures ANOVA. The Stroop and mood scale results are presented in the following order: 1) before sailing to Antarctica, 2) the first day on the continent, 3) after the mission on the continent and 4) on the boat return trip. No significant difference was observed for the Stroop (759±166 ms, 850±114 ms, 772±179 ms and 833±105 ms, respectively) or the PANAS (39.5±5.7, 40.5±5, 41.8±6.9, 37.3±5.8 positive emotions, and 17.5±2.3, 18.2±5, 18.3±8.6, 15.8±5.4 negative emotions, respectively) (p>0.05). However, there appeared to be an improvement at the end of the second week. Furthermore, simple reaction time was significantly lower at the end of the second week, a time when important decisions were taken about the mission, than the week before (416±39 ms vs. 459.8±39 ms, respectively; p=0.030). The saliva cortisol was not significantly different (p>0.05), possibly due to important variations, and seemed to reach a peak on the first day on the continent.
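The week-to-week reaction-time comparison reported above can be sketched with a paired t-test in Python. Note the hedges: the authors used repeated-measures ANOVA in SPSS, so SciPy's `ttest_rel` is only an illustrative stand-in for the pairwise contrast, and the per-explorer values below are synthetic, chosen merely to resemble the reported group means (~460 ms and ~416 ms), not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical simple reaction times (ms) for 6 crew members, measured in
# week 1 and again at the end of week 2 (synthetic illustration only).
week1 = np.array([455.0, 470.0, 448.0, 492.0, 430.0, 464.0])
week2 = np.array([410.0, 402.0, 425.0, 440.0, 398.0, 421.0])

# Paired (within-subject) comparison: each explorer is their own control.
t_stat, p_value = stats.ttest_rel(week1, week2)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired design is the natural choice here because the same six explorers were measured at every time point, so between-subject variability cancels out of the comparison.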
However, cortisol from hair samples taken pre- and post-expedition increased significantly (2.4±0.5 pg/mg pre-expedition and 16.7±9.2 pg/mg post-expedition, p=0.013), showing important stress during the expedition. Moreover, no significant difference was observed in grip strength except between after the mission on the continent and after the boat return trip (91.5±21 kg vs. 85±19 kg, p=0.20). In conclusion, cognitive performance does not seem to be affected during the expedition; rather, it seems to increase for specific important events where the crew focused on the present task. Physiological stress does not seem to change significantly at specific moments; however, a global pre-post mission measure can be informative, and for this reason a pre-expedition baseline measure is important for crewmembers on long-term missions.
Keywords: Antarctica, cognitive performance, expedition, physiological adaptation, reaction time
Procedia PDF Downloads 243
1741 Psychological Factors of Readiness of Defectologists to Professional Development: On the Example of Choosing an Educational Environment
Authors: Inna V. Krotova
Abstract:
The study pays special attention to the definition of the psychological potential of a specialist-defectologist that determines his or her desire to increase the level of professional competence. The group included participants of an educational environment: an additional professional program, 'Technologies of psychological and pedagogical assistance for children with complex developmental disabilities', implemented by the department of defectology and clinical psychology of the KFU jointly with the Support Fund for the Deafblind 'Co-Unity'. The purpose of our study was to identify the psychological aspects of the readiness of the specialist-defectologist for his or her professional development. The study assessed indicators of psychological preparedness, taking into account its four components: motivational, cognitive, emotional and volitional. We used valid and standardized tests during the study. As a result of factor analysis of the data received (Extraction Method: Principal Component Analysis; Rotation Method: Varimax with Kaiser Normalization; rotation converged in 12 iterations), three factors with maximum factor loadings were identified from 24 indices, and their correlation coefficients with other indicators were taken into account at reliability levels p ≤ 0.001 and p ≤ 0.01. Thus the system-making factor was determined: 'motivation to achieve success'. It formed a correlation galaxy with two other factors, 'general internality' and 'internality in the field of achievements', as well as with such psychological indicators as 'internality in the field of family relations', 'internality in the field of interpersonal relations' and 'low self-control-high self-control' (the names of the scales are the same as in the analysis methods).
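The rotation step named above (principal components followed by Varimax rotation) can be sketched in Python. The `varimax` function below is a standard textbook implementation of the Varimax criterion, not the authors' SPSS procedure, and the 24-indicator loading matrix is synthetic, standing in for the study's data.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a factor-loading matrix using the Varimax criterion.

    Returns the rotated loadings and the orthogonal rotation matrix.
    """
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the Varimax criterion (Kaiser's formulation).
        grad = loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ R, R

# Synthetic 24-indicator, 3-factor loading matrix (illustration only).
rng = np.random.default_rng(0)
raw = rng.normal(size=(24, 3))
rotated, R = varimax(raw)

# R is orthogonal, so rotation preserves communalities (row sums of squares).
print(np.allclose(R.T @ R, np.eye(3)))  # True
```

Because the rotation matrix is orthogonal, Varimax redistributes variance across factors to simplify interpretation without changing how much variance each indicator shares with the factor space.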
In conclusion, we present some proposals to take into account the psychological model of readiness of specialists-defectologists for their professional development and to stimulate the growth of their professional competence. The study has practical value for all providers of special education and organizations that have their own specialists-defectologists, teachers-defectologists, teachers for correctional and ergotherapeutic activities, and specialists working in the field of correctional-pedagogical activity (speech therapists) with people with special needs who need true professional support.
Keywords: psychological readiness, defectologist, professional development, psychological factors, special education, professional competence, innovative educational environment
Procedia PDF Downloads 175
1740 Exploring Methods for Urbanization of 'Village in City' in China: A Case Study of Hangzhou
Abstract:
After the economic reform in 1978, urbanization in China grew fast, urging cities to expand at an unprecedented speed. Surrounding villages were annexed unprepared, and they turned into a new type of community called the 'village in city.' Two things happened here. First, the locals gave up farming and turned to secondary and tertiary industry as a result of losing their land. Secondly, attracted by the high income in cities and the low rent in these communities, plenty of migrants came in. Such an area is important to a rapidly growing city because it provides a transitional zone. But owing to its passivity and low level of development, the 'village in city' has caused a lot of trouble for the city. Densities of population and construction are both high, while facilities are severely inadequate. Unplanned and illegal structures are built, which creates a complex mixed-function area and leads to a poor residential environment. Besides, the locals have a strong consciousness of property rights over the land, which holds back the transformation and development of the community. Although land capitalization can bring significant benefits, it is inappropriate to give great financial compensation to the locals, and considering the large population of city migrants, it is important to explore the relationship among the 'village in city,' city immigrants and the city itself. Taking Hangzhou as an example, this paper analyzes the development process, spatial distribution of functions, industrial structure and current traffic system of the 'village in city.' Building on this research, the paper puts forward a common method for urban planning through the following measures: adding city functions, building civil facilities, re-planning the spatial distribution of functions, changing the constitution of local industry and planning a new traffic system.
Under this plan, the 'village in city' can finally be absorbed into the city and make its own contribution to urbanization.
Keywords: China, city immigrant, urbanization, village in city
Procedia PDF Downloads 217
1739 Systems Intelligence in Management (High Performing Organizations and People Score High in Systems Intelligence)
Authors: Raimo P. Hämäläinen, Juha Törmänen, Esa Saarinen
Abstract:
Systems thinking has been acknowledged as an important approach in the strategy and management literature ever since the seminal works of Ackoff in the 1970s and Senge in the 1990s. The early literature was very much focused on structures and organizational dynamics. Understanding systems is important, but making improvements also needs ways to understand human behavior in systems. Peter Senge's book The Fifth Discipline gave the inspiration for the development of the concept of Systems Intelligence (SI). The concept integrates the concepts of personal mastery and systems thinking. SI refers to intelligent behavior in the context of complex systems involving interaction and feedback. It is a competence related to the skills needed in strategy and in the environment of modern industrial engineering and management, where people skills and systems play an increasingly important role. The eight factors of Systems Intelligence have been identified from extensive surveys, and the factors relate to perceiving, attitude, thinking and acting. The personal self-evaluation test developed consists of 32 items, which can also be applied in a peer-evaluation mode. The concept and test extend to organizations too: one can talk about organizational systems intelligence. This paper reports the results of an extensive survey based on peer evaluation. The results show that systems intelligence correlates positively with professional performance. People in a managerial role score higher in SI than others. Age improves the SI score, but there is no gender difference. Top organizations score higher in all SI factors than lower-ranked ones. The SI tests can also be used as leadership and management development tools helping self-reflection and learning. Finding ways of enhancing organizational learning and development is important. Today, gamification is a new, promising approach. The items in the SI test have been used to develop an interactive card game following the Topaasia game approach.
It is an easy way of engaging people in a process which helps participants see and approach problems in their organization. It also helps individuals identify challenges in their own behavior and improve their SI.
Keywords: gamification, management competence, organizational learning, systems thinking
Procedia PDF Downloads 96
1738 Seismic Response of Structure Using a Three Degree of Freedom Shake Table
Authors: Ketan N. Bajad, Manisha V. Waghmare
Abstract:
Earthquakes are the biggest threat to civil engineering structures, costing billions of dollars and thousands of deaths around the world every year. There are various experimental techniques, such as pseudo-dynamic tests (a nonlinear structural dynamic technique), real-time pseudo-dynamic tests and the shaking table test method, that can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components which are mounted on it. It simulates a seismic event using existing seismic data, nearly truly reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. The various types of shake table are the vertical shake table, horizontal shake table, servo-hydraulic shake table and servo-electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., in the X, Y and Z directions) shake table. A three-DOF shaking table is a useful experimental apparatus, as it imitates a real-time desired acceleration vibration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method. The table is designed to have a capacity of up to 981 newtons. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer is mounted on the model and used for recording the data. The experimental results obtained are further validated with the results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used for direct numerical conclusions (such as stiffness, deflection, etc.)
as many uncertainties are involved in scaling a small-scale model. The model shows modal forms and gives rough deflection values. The experimental results demonstrate that the shake table is the most effective of all the methods available for seismic assessment of a structure.
Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed
Procedia PDF Downloads 140
1737 A Non-Invasive Neonatal Jaundice Screening Device Measuring Bilirubin on Eyes
Authors: Li Shihao, Dieter Trau
Abstract:
Bilirubin is a yellow substance that is made when the body breaks down old red blood cells. High levels of bilirubin can cause jaundice, a condition that makes the newborn's skin and the white part of the eyes look yellow. Jaundice is a serial killer in developing countries in Southeast Asia, such as Myanmar, and in most parts of Africa, where jaundice screening is largely unavailable. Worldwide, 60% of newborns experience infant jaundice. One in ten will require therapy to prevent serious complications and lifelong neurologic sequelae. Limitations of current solutions: - Blood test: Blood tests are painful, may be largely unavailable in poor areas of developing countries, and can also be costly and unsafe due to insufficient investment and lack of access to health care systems. - Transcutaneous jaundice meter: 1) it can only provide reliable results for Caucasian newborns due to skin pigmentation, since current technologies measure bilirubin by the color of the skin (basically, the darker the skin is, the harder it is to measure); 2) current jaundice meters are not affordable for most underdeveloped areas in Africa, like Kenya and Togo; 3) fat tissue under the skin also influences the accuracy, giving overestimated results; 4) current jaundice meters are not reliable after treatment (phototherapy), because bilirubin levels underneath the skin are reduced first, while overall levels may remain quite high. Thus, there is an urgent need for a low-cost, non-invasive device that can be effective not only for Caucasian babies but also for Asian and African newborns, to save lives at the most vulnerable time and prevent complications like brain damage. Instead of measuring bilirubin on the skin, we propose a new method to measure it on the sclera, which avoids differences in skin pigmentation and ethnicity, since the sclera is white regardless of racial background.
This is a novel approach to measuring bilirubin by an optical method of light reflection off the white part of the eye. Moreover, the device is connected to a smart device, which provides a user-friendly interface and the ability to record clinical data continuously. A disposable eye cap is provided, avoiding contamination and fixing the distance to the eye.
Keywords: jaundice, bilirubin, non-invasive, sclera
Procedia PDF Downloads 236
1736 Microfiber Release During Laundry Under Different Rinsing Parameters
Authors: Fulya Asena Uluç, Ehsan Tuzcuoğlu, Songül Bayraktar, Burak Koca, Alper Gürarslan
Abstract:
Microplastics are contaminants that are widely distributed in the environment, with a detrimental ecological effect. Besides this, recent research has proved the existence of microplastics in human blood and organs. Microplastics in the environment can be divided into two main categories: primary and secondary. Primary microplastics are plastics that are released into the environment as microscopic particles. Secondary microplastics, on the other hand, are the smaller particles that are shed as a result of the consumption of synthetic materials in textile and other products. Textiles are the main source of microplastic contamination in aquatic ecosystems. Laundry of synthetic textiles (34.8%) accounts for an average annual discharge of 3.2 million tons of primary microplastics into the environment. Recently, research on microfiber shedding from laundry has gained traction. However, no comprehensive study has been conducted that analyzes microfiber shedding from the standpoint of rinsing parameters. The purpose of the present study is to quantify microfiber shedding from fabric under different rinsing conditions and to determine the rinsing parameters that affect microfiber release in a laundry environment. In this regard, a parametric study is carried out to investigate the key factors affecting microfiber release from a front-load washing machine. These parameters are the amount of water used during the rinsing step and the spinning speed at the end of the washing cycle. The Minitab statistical program is used to create a design of experiments (DOE) and analyze the experimental results. Tests are repeated twice, and apart from the controlled parameters, the other washing parameters are kept constant in the washing algorithm. At the end of each cycle, the released microfibers are collected via a custom-made filtration system and weighed on a precision balance.
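The two-factor experimental design described above (rinse-water amount × spin speed, with each run repeated twice) can be laid out as a full-factorial run sheet. The sketch below is illustrative only: the factor levels are hypothetical, since the abstract does not report the exact values tested, and the authors built their design in Minitab rather than in code.

```python
from itertools import product
import random

# Hypothetical levels; the paper does not report the exact values tested.
rinse_water_l = [10, 15, 20]        # rinse-water amount per cycle (litres)
spin_speed_rpm = [800, 1200, 1400]  # final spin speed (rpm)
replicates = 2                      # each run repeated twice, as in the study

# Full-factorial design: every combination of levels, replicated.
runs = [
    {"rinse_water_l": w, "spin_speed_rpm": s, "replicate": r}
    for w, s in product(rinse_water_l, spin_speed_rpm)
    for r in range(1, replicates + 1)
]

# Randomising the run order guards against time-dependent drift in the machine.
random.seed(7)
random.shuffle(runs)

print(len(runs))  # 3 levels x 3 levels x 2 replicates = 18 runs
```

Replication and randomised run order are what let the subsequent analysis separate the effects of the controlled rinsing parameters from run-to-run noise.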
The results showed that increasing the water amount during the rinsing step drastically increased the amount of microplastic released from the washing machine. The parametric study also revealed that increasing the spinning speed increases microfiber release from textiles.
Keywords: front load, laundry, microfiber, microfiber release, microfiber shedding, microplastic, pollution, rinsing parameters, sustainability, washing parameters, washing machine
Procedia PDF Downloads 97
1735 poly(N-Isopropylacrylamide)-Polyvinyl Alcohol Semi-Interpenetrating Network Hydrogel for Wound Dressing
Authors: Zi-Yan Liao, Shan-Yu Zhang, Ya-Xian Lin, Ya-Lun Lee, Shih-Chuan Huang, Hong-Ru Lin
Abstract:
Traditional wound dressings, such as gauze and bandages, easily adhere to the tissue fluid exuded from the wound, causing secondary damage when they are removed. Starting from this observation, this study develops a hydrogel dressing that does not cause secondary damage to the wound when torn off and, at the same time, creates an environment conducive to wound healing. First, the temperature-sensitive material N-isopropylacrylamide (NIPAAm) was used as the substrate. Because of its low mechanical properties, a hydrogel of NIPAAm alone would break when pulled during human activity, so polyvinyl alcohol (PVA) was interpenetrated into it to enhance the mechanical properties, and a semi-interpenetrating network (semi-IPN) composed of poly(N-isopropylacrylamide) (PNIPAAm) and polyvinyl alcohol (PVA) was prepared by free-radical polymerization. PNIPAAm was cross-linked with N,N'-methylenebisacrylamide (NMBA) in an ice bath in the presence of linear PVA, and tetramethylethylenediamine (TEMED) was added as a promoter to speed up gel formation. The polymerization stage was carried out at 16°C for 17 hours; after gel formation, the product was washed with distilled water for three days, with the water changed several times, to complete the preparation of the semi-IPN hydrogel. Finally, various tests were used to analyze the effects of different ratios of PNIPAAm and PVA on the semi-IPN hydrogels. In the swelling test, it was found that the maximum swelling ratio can reach about 50% at 21°C, and the higher the ratio of PVA, the more water can be absorbed. The saturated moisture content test showed that the more PVA is added, the higher the saturated water content. The water vapor transmission rate test showed that the value for the semi-IPN hydrogel is about 57 g/m²/24 hr, which is largely independent of the proportion of PVA.
The LCST test found that, compared with the PNIPAAm hydrogel, the semi-IPN hydrogel possesses the same lower critical solution temperature (30-35°C). The semi-IPN hydrogel prepared in this study responds well to temperature and has thermosensitive characteristics. It is expected that, after improvement, it can be used in the treatment of surface wounds, overcoming the shortcomings of traditional dressings.
Keywords: hydrogel, N-isopropylacrylamide, polyvinyl alcohol, hydrogel wound dressing, semi-interpenetrating polymer network
Procedia PDF Downloads 80
1734 Production of High Purity Cellulose Products from Sawdust Waste Material
Authors: Simiksha Balkissoon, Jerome Andrew, Bruce Sithole
Abstract:
Approximately half of the wood processed in the Forestry, Timber, Pulp and Paper (FTPP) sector is accumulated as waste. The concept of a "green economy" encourages industries to employ revolutionary, transformative technologies to eliminate waste generation by exploring the development of new value chains. The transition towards an almost paperless world, driven by the rise of digital media, has resulted in a decline in traditional paper markets, prompting the FTPP sector to reposition itself and expand its product offerings by unlocking the potential of value-adding opportunities from renewable resources such as wood, to generate revenue and mitigate its environmental impact. The production of valuable products from wood waste such as sawdust has been extensively explored in recent years. Wood components such as lignin, cellulose and hemicelluloses, which can be extracted selectively by chemical processing, are suitable candidates for producing numerous high-value products. In this study, a novel approach was developed to produce high-value cellulose products, such as dissolving wood pulp (DWP), from sawdust. DWP is a high-purity cellulose product used in several applications in the pharmaceutical, textile, food, and paint and coatings industries. The proposed approach demonstrates the potential to eliminate several complex processing stages, such as pulping and bleaching, which are associated with the traditional commercial processes for high-purity cellulose products such as DWP, making it less chemical-, energy- and water-intensive. The developed process followed a path of experimentally designed lab tests evaluating typical processing conditions such as residence time, chemical concentrations, liquid-to-solid ratios and temperature, followed by the application of suitable purification steps. Characterization of the product from the initial stage was conducted using commercially available DWP grades as reference materials.
The chemical characteristics of the products thus far have shown properties similar to commercial products, making the proposed process a promising and viable option for the production of DWP from sawdust.
Keywords: biomass, cellulose, chemical treatment, dissolving wood pulp
Procedia PDF Downloads 186
1733 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System
Authors: June-Jei Kuo, Yi-Chuan Hsieh
Abstract:
Because of the rapid growth of information technology, more and more libraries are introducing new information retrieval systems to enhance the user experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few studies have discussed usability from the users' perspective. The aims of this study are to understand the scenarios in which the information retrieval system is used, and to learn why users are willing to continue using the web-scale discovery system, in order to improve the system and promote the use of university libraries. Besides questionnaires, observations and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate the system quality, information quality, service quality, use, user satisfaction, flow, and continued use of the web-scale discovery system by students of National Chung Hsing University. The results are then analyzed through descriptive statistics and structural equation modeling using AMOS. The results reveal that, for the web-scale discovery system, the user's evaluation of system quality and information quality is positively related to use and satisfaction, whereas service quality only affects user satisfaction. User satisfaction and flow show a significant impact on continued use. Moreover, user satisfaction has a significant impact on user flow. According to the results of this study, maintaining the stability of the information retrieval system, improving the quality of the information content, and enhancing the relationship between subject librarians and students are recommended for academic libraries.
Meanwhile, improving the system user interface, minimizing the number of system-level layers, strengthening data accuracy and relevance, modifying the sorting criteria of the data, and supporting an auto-correct function are required of the system provider. Finally, establishing better communication with librarians is recommended for all users.
Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library
Procedia PDF Downloads 103
1732 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners
Authors: Michael McMahon
Abstract:
Media (video, animation, graphics) have long been used by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development. However, overreliance is a concern. This paper is part of a larger Ph.D. study looking into these new challenges for sports educators, most notably how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's Cognitive Theory of Multimedia Learning. Mayer's multimedia learning theory proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the prior knowledge principle suggests different learning outcomes for novice and non-novice learners, respectively. Little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current digital coaching media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis.
It is hoped that the resulting data will help identify and inform future instructional content design and delivery for sports practitioners and help promote design practices optimal for different levels of expertise.
Keywords: multimedia learning, e-learning, design for learning, ICT
Procedia PDF Downloads 103
1731 Ecosystem Engineering Strengthens Bottom-Up and Weakens Top-Down Effects via Trait-Mediated Indirect Interactions
Authors: Zhiwei Zhong, Xiaofei Li, Deli Wang
Abstract:
Ecosystem engineering is a powerful force shaping community structure and ecosystem function. Yet, very little is known about the mechanisms by which engineers affect vital ecosystem processes like trophic interactions. Here, we examine the potential for a herbivore ecosystem engineer, domestic sheep, to affect trophic interactions between the web-building spider Argiope bruennichi, its grasshopper prey Euchorthippus spp., and the grasshoppers’ host plant Leymus chinensis. By integrating small- and large-scale field experiments, we demonstrate that: 1) moderate sheep grazing changed the structure of plant communities by suppressing strongly interacting forbs within a grassland matrix; 2) this change in plant community structure drove interaction modifications between the grasshoppers and their grass host plants and between grasshoppers and their spider predators, and 3) these interaction modifications were entirely mediated by plasticity in grasshopper behavior. Overall, ecosystem engineering by sheep grazing strengthened bottom-up effects and weakened top-down effects via trait-mediated interactions, resulting in a nearly two-fold increase in grasshopper densities. Interestingly, the grasshopper behavioral shifts which reduced spider per capita predation rates in the microcosms did not translate to reduced spider predation rates at the larger system scale because increased grasshopper densities offset behavioral effects at larger scales. Our findings demonstrate that 1) ecosystem engineering can strongly alter trophic interactions, 2) such effects can be driven by cryptic trait-mediated interactions, and 3) the relative importance of trait- versus density effects as measured by microcosm experiments may not reflect the importance of these processes at realistic ecological scales due to scale-dependent interactions.
Keywords: bottom-up effects, ecosystem engineering, trait-mediated indirect effects, top-down effects
Procedia PDF Downloads 355
1730 Is Electricity Consumption Stationary in Turkey?
Authors: Eyup Dogan
Abstract:
The number of research articles analyzing the integration properties of energy variables has increased rapidly in the energy literature over the past decade or so. The stochastic behaviors of energy variables are worth knowing for several reasons. For instance, national policies to conserve or promote energy consumption, which should be treated as shocks to energy consumption, will have only transitory effects if energy consumption is found to be stationary in a country. Furthermore, it is also important to know the order of integration in order to employ an appropriate econometric model. Despite being an important subject for applied energy (economics) and having a huge volume of studies, the existing literature still has several limitations. For example, many of the studies use aggregate energy consumption and national-level data. In addition, a large part of the literature consists of multi-country studies or focuses solely on the U.S. This is the first study in the literature that considers a form of energy consumption by sector at the sub-national level. This research study aims to investigate the unit root properties of electricity consumption for 12 regions of Turkey by four sectors, in addition to total electricity consumption, for the purpose of filling the aforementioned gaps in the literature. In this regard, we analyze the stationarity properties of 60 cases. Because the use of multiple unit root tests makes the results robust and consistent, we apply the Dickey-Fuller unit root test based on Generalized Least Squares regression (DFGLS), the Phillips-Perron unit root test (PP) and the Zivot-Andrews unit root test with one endogenous structural break (ZA). The main finding of this study is that electricity consumption is trend stationary in 7 cases according to DFGLS and PP, whereas it is a stationary process in 12 cases when we take the structural change into account by applying ZA.
Thus, shocks to electricity consumption have transitory effects in those cases; namely, agriculture in regions 1, 4 and 7; industry in regions 5, 8, 9, 10 and 11; business in regions 4, 7 and 9; and total electricity consumption in region 11. Regarding policy implications, policies to decrease or stimulate the use of electricity have a long-run impact on electricity consumption in 80% of cases in Turkey, given that 48 cases are non-stationary processes. On the other hand, the past behavior of electricity consumption can be used to predict its future behavior in only 12 cases.
Keywords: unit root, electricity consumption, sectoral data, subnational data
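The tests the abstract applies (DFGLS, PP, ZA) are refinements of the basic Dickey-Fuller regression, in which a strongly negative t-statistic on the lagged level rejects the unit-root null. As a minimal, purely illustrative sketch of that underlying idea (not the authors' estimation, which uses the refined tests and real sectoral data), a pure-Python Dickey-Fuller regression separates a random walk (shocks permanent) from a stationary AR(1) (shocks transitory):

```python
import random

def df_tstat(y):
    """t-statistic of rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (no lag terms, no trend).
    A t-statistic far below about -2.9 rejects the unit-root null at 5%."""
    x = y[:-1]                                   # y_{t-1}
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    n = len(dy)
    mx, mdy = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    rho = sum((xi - mx) * (di - mdy) for xi, di in zip(x, dy)) / sxx
    alpha = mdy - rho * mx
    resid = [di - alpha - rho * xi for xi, di in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)     # residual variance
    return rho / (s2 / sxx) ** 0.5

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(500)]

walk = [0.0]
for u in shocks:
    walk.append(walk[-1] + u)        # unit root: shocks are permanent

ar1 = [0.0]
for u in shocks:
    ar1.append(0.5 * ar1[-1] + u)    # stationary AR(1): shocks die out

print(df_tstat(walk), df_tstat(ar1))  # AR(1) statistic is far more negative
```

In practice one would use library implementations (e.g. `adfuller` and `zivot_andrews` in statsmodels) rather than this hand-rolled regression.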
Procedia PDF Downloads 410
1729 The Subcellular Localisation of EhRRP6 and Its Involvement in Pre-Ribosomal RNA Processing in Growth-Stressed Entamoeba histolytica
Authors: S. S. Singh, A. Bhattacharya, S. Bhattacharya
Abstract:
The eukaryotic exosome complex plays a pivotal role in RNA biogenesis, maturation, surveillance, and the differential expression of various RNAs in response to varying environmental signals. The exosome is composed of nine evolutionarily conserved core subunits and the associated exonucleases Rrp6 and Rrp44. Rrp6p is crucial for the processing of rRNAs and other non-coding RNAs, the regulation of polyA tail length, and the termination of transcription. Rrp6p, a 3'-5' exonuclease, is required for degradation of the 5'-external transcribed spacer (ETS) released from rRNA precursors during the early steps of pre-rRNA processing. In the parasitic protist Entamoeba histolytica, unprocessed pre-rRNA and the 5' ETS subfragment accumulate in response to growth stress. To understand the processes leading to this accumulation, we searched for Rrp6 and the exosome subunits in E. histolytica using in silico approaches. Of the nine core exosomal subunits, seven showed a high percentage of sequence similarity to their yeast and human counterparts. The EhRrp6 homolog contained the exoribonuclease and HRDC domains, as in yeast, but its N-terminus lacked the PMC2NT domain. EhRrp6 complemented the temperature-sensitive phenotype of yeast rrp6Δ cells, suggesting conservation of biological activity. We demonstrated the 3'-5' exoribonuclease activity of EhRrp6p on appropriate in vitro-synthesized RNA substrates. Like the yeast enzyme, EhRrp6p degraded unstructured RNA but degraded stem-loops only slowly. Furthermore, immunolocalization revealed that EhRrp6 was nuclear-localized in normal cells but was depleted from the nucleus during serum starvation, which could explain the accumulation of the 5' ETS during stress. Our study shows the functional conservation of EhRrp6p in E. histolytica, an early-branching eukaryote, and will help in understanding the evolution of exosomal components and their regulatory functions.
Keywords: Entamoeba histolytica, exosome complex, rRNA processing, Rrp6
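The substrate preference reported above — rapid degradation of unstructured RNA but slow progress through stem-loops — can be caricatured in a few lines of code. This is purely an illustrative toy (the sequence, the region coordinates, and the hard "stall" at a structured region are our simplifications), not a model of the enzyme's kinetics:

```python
def degrade_3to5(rna, structured_regions, max_steps=10_000):
    """Toy model of a 3'->5' exoribonuclease such as Rrp6: remove one
    nucleotide at a time from the 3' end, stalling when the next
    position falls inside an annotated structured (stem-loop) region,
    given as half-open (start, end) index pairs."""
    pos = len(rna)                 # current 3' end (exclusive index)
    for _ in range(max_steps):
        nxt = pos - 1
        if nxt < 0:
            break                  # substrate fully degraded
        if any(lo <= nxt < hi for lo, hi in structured_regions):
            break                  # stem-loop blocks rapid degradation
        pos = nxt
    return rna[:pos]

# an unstructured substrate is degraded completely;
# degradation stalls at the 3' boundary of a stem-loop
print(degrade_3to5("AUGGCAUCGU", []))
print(degrade_3to5("AUGGCAUCGU", [(2, 5)]))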
Procedia PDF Downloads 201
1728 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization: the number of variables is finite, while the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, at least one of the constraints or the objective function is nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific goals of investors. The risk of the entire portfolio may be less than the risks of its individual investments. For example, suppose we invest M euros in N shares for a specified period. Let yi > 0 be the end-of-period return per unit invested in stock i (i = 1, ..., N). The goal is to determine the amount xi to be invested in stock i, i = 1, ..., N, so as to maximize the end-of-period value yᵀx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio is the portfolio with the best risk-return trade-off among those that meet the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying a multi-start technique in every iteration to search for the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution, and others, is based on the optimality criteria of quasi-optimal functions.
As a result, we obtain a mathematical model and the optimal investment portfolio when returns are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
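The outer approximation idea — replace the infinite constraint set by a growing finite one, solve, then add the most violated constraint — can be illustrated on a toy one-variable semi-infinite program. This sketch is hypothetical: the grid search stands in for the authors' stochastic multi-start search for constraint parameters, and the finite subproblem here is trivial rather than a portfolio LP:

```python
import math

def outer_approximation(g, t_lo, t_hi, n_grid=1000, tol=1e-9, max_iter=50):
    """Minimize x subject to x >= g(t) for every t in [t_lo, t_hi].
    The infinite constraint set is replaced by a growing finite set T:
    solve the finite subproblem, find the most violated constraint on a
    grid, add it, and repeat until no constraint is violated."""
    grid = [t_lo + (t_hi - t_lo) * i / n_grid for i in range(n_grid + 1)]
    T = [t_lo]                            # finite working constraint set
    x = g(t_lo)
    for _ in range(max_iter):
        x = max(g(t) for t in T)          # finite subproblem (trivial here)
        t_star = max(grid, key=g)         # separation: most violated t
        if g(t_star) <= x + tol:          # feasible for all grid points
            break
        T.append(t_star)
    return x, T

# toy SIP: min x  s.t.  x >= sin(t) for all t in [0, pi]  ->  optimum x* = 1
x_opt, active = outer_approximation(math.sin, 0.0, math.pi)
print(x_opt, active)
```

In a portfolio setting the finite subproblem would instead be a linear or nonlinear program over the allocations x1, ..., xN, re-solved each time a violated return scenario is added.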
Procedia PDF Downloads 309
1727 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance
Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu
Abstract:
Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. Many existing smart canes for the visually impaired use an ultrasonic transducer for obstacle detection to help the user navigate. Though a basic smart cane increases the safety of its user, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules. Apart from the basic obstacle-detection features, Digi-Buddy assists the user by capturing video/images with a wide-angle camera and streaming them to a server, which detects objects using a deep convolutional neural network. In addition to determining what a particular object is, the distance to the object is measured by the ultrasonic transducer. A sound-generation application, modelled with the help of natural language processing, converts the processed images/objects into audio: the detected object is identified by its name, which is transmitted to the user through Bluetooth earphones. Object detection is extended to facial recognition, which matches the faces of people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provided on the cane stops the alarm; otherwise, an automatic intimation with the user's whereabouts, obtained via GPS, is sent to friends and family. Beyond the safety and security offered by existing smart canes, the proposed concept is to be implemented as a prototype that helps the visually impaired visualize their surroundings through audio in a more amicable way.
Keywords: artificial intelligence, facial recognition, natural language processing, internet of things
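The ultrasonic ranging step the abstract relies on reduces to one formula: distance = echo round-trip time × speed of sound / 2. A minimal sketch, assuming a typical temperature-corrected speed of sound and a hypothetical 1 m alert threshold (neither value is specified in the paper):

```python
def echo_to_distance_m(echo_seconds, temp_c=20.0):
    """Distance to an obstacle from an ultrasonic round-trip echo time.
    The speed of sound in air rises roughly 0.6 m/s per degree Celsius;
    the pulse travels out and back, hence the division by 2."""
    speed = 331.3 + 0.6 * temp_c       # speed of sound in air, m/s
    return echo_seconds * speed / 2.0

def should_alert(distance_m, threshold_m=1.0):
    """Trigger the audio warning when an obstacle is within the threshold."""
    return distance_m < threshold_m

d = echo_to_distance_m(0.01)   # a 10 ms echo at 20 C is about 1.7 m away
print(d, should_alert(d))
```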
Procedia PDF Downloads 355