Search results for: laboratory tests
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6169

799 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels

Authors: Shin Woo Kim, Eui Ju Lee

Abstract:

The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, in particular, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase the octane number and reduce soot emissions. However, the wide application of biofuel is still limited by a lack of detailed combustion properties, such as the auto-ignition temperature, and of data on pollutant emissions such as NOx and soot, which mainly concern vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For the auto-ignition temperature and NOx emission, numerical simulations were performed on a well-stirred reactor (WSR) to model a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. The response surface method (RSM) was also introduced as a design of experiments (DOE), which enables various combustion properties to be predicted and optimized systematically with respect to three independent variables: ethanol mole fraction, equivalence ratio, and residence time. The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but the NOx yield decreases with increasing ethanol mole fraction. This implies that bioethanol-added gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbons increase dramatically with increasing ethanol content, which results from incomplete combustion and hence calls for adjusting the combustion itself rather than relying on an after-treatment system. RSM with the three independent variables predicts the auto-ignition temperature accurately.
However, the NOx emission showed a large difference between the calculated values and those predicted with conventional RSM, because the NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To relax the growth rate of the dependent variable, the NOx emission was transformed to common logarithms and the RSM analysis was repeated. The NOx emission predicted through the logarithmic transformation is in fairly good agreement with the experimental results. For a more tangible understanding of pollutant emissions from gasoline/ethanol fuel, combustion products were measured experimentally in gasoline/ethanol pool fires, which are widely used as fire sources in laboratory-scale experiments. Three measurement methods were introduced to characterize the pollutant emissions: gas concentration measurements including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). The soot yield measured by gravimetric sampling decreased dramatically as ethanol was added, but the NOx emission was almost comparable regardless of ethanol mole fraction. The morphology of the soot particles was investigated to assess the degree of soot maturity. Incipient soot, such as liquid-like PAHs, was clearly observed in the soot from gasoline with higher ethanol content, whereas the soot from undiluted gasoline appeared more mature.
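The benefit of the logarithmic transformation described above can be sketched numerically. The snippet below is a minimal one-variable illustration, not the authors' actual three-variable RSM model, and the "NOx" response is hypothetical: a second-order polynomial fitted to a steeply varying response becomes far more accurate after a log10 transform.

```python
import numpy as np

def quadratic_fit(x, y):
    # Least-squares fit of y ~ b0 + b1*x + b2*x^2, a one-variable
    # stand-in for a second-order RSM polynomial.
    A = np.column_stack([np.ones_like(x), x, x * x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Hypothetical steeply varying "NOx" response over one design variable.
x = np.linspace(0.0, 1.0, 21)
nox = 10.0 ** (1.0 + 4.0 * x)  # spans four orders of magnitude

# Maximum relative error of the raw-scale vs. the log-scale surrogate.
raw_err = np.max(np.abs(quadratic_fit(x, nox) - nox) / nox)
log_fit = 10.0 ** quadratic_fit(x, np.log10(nox))
log_err = np.max(np.abs(log_fit - nox) / nox)

print(raw_err, log_err)  # the raw fit misses badly; the log fit is near-exact
```

The raw-scale fit is dominated by the largest response values, so its relative error at the low end is enormous, which mirrors the mismatch the authors report before transforming.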

Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)

Procedia PDF Downloads 199
798 Ischemic Stroke Detection in Computed Tomography Examinations

Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina

Abstract:

Stroke is a worldwide concern; in Brazil alone it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%). Early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used, owing to its wide availability and rapid diagnosis. Detection depends on the size and severity of the lesions and on the time elapsed between the first symptoms and the examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed on two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with a window width between 80 and 100 Hounsfield units. We used different image processing techniques, such as morphological filters, the discrete wavelet transform, and Fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region. These subjective results were compared with the objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters actually enhance the ischemic areas for subjective evaluation. The comparison between the area of the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviations greater than 12% in any of the 10 examinations, although the areas contoured by the neuroradiologist tended to be smaller than those obtained by the algorithm.
These results show the importance of computer-aided diagnosis software to assist neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
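As a rough sketch of the Fuzzy C-means step, not the authors' pipeline and with hypothetical intensity values, a minimal 1-D implementation on simulated Hounsfield-unit data might look like this:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on 1-D data. Returns the cluster centres
    and the (n x c) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)  # membership-weighted means
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u

# Hypothetical intensities: a darker (ischemic-like) and a normal region.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(28, 1, 200), rng.normal(42, 1, 200)])
centres, u = fuzzy_c_means(pixels)
print(np.sort(centres))  # roughly [28, 42]
```

Thresholding the membership matrix then yields the segmented ischemic region that the algorithm's area is computed from.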

Keywords: ischemic stroke, image processing, CT scans, Fuzzy C-means

Procedia PDF Downloads 345
797 Effects of Test Environment on the Sliding Wear Behaviour of Cast Iron, Zinc-Aluminium Alloy and Its Composite

Authors: Mohammad M. Khan, Gajendra Dixit

Abstract:

The partially lubricated sliding wear behaviour of a zinc-based alloy reinforced with 10 wt% SiC particles has been studied as a function of applied load and solid-lubricant particle size, and compared with that of the matrix alloy and conventionally used grey cast iron. The wear tests were conducted at a sliding velocity of 2.1 m/s under various partially lubricated conditions using a pin-on-disc machine as per ASTM G99-05. Base oil (SAE 20W-40) or mixtures of the base oil with 5 wt% graphite of particle sizes 7-10 µm and 100 µm were used to create the lubricated conditions. The matrix alloy revealed primary dendrites of the α phase with eutectoid α + η and ε phases in the interdendritic regions. A similar microstructure was depicted by the composite, with the additional presence of the dispersoid SiC particles. In the case of cast iron, flakes of graphite were observed in the matrix, which comprised mostly pearlite with a limited quantity of ferrite. Results show a large improvement in the wear resistance of the zinc-based alloy after reinforcement with SiC particles. The cast iron shows an intermediate response between the matrix alloy and the composite. Solid lubrication improved the wear resistance and friction behaviour of both the reinforced and the base alloy. The minimum wear rate was obtained in the oil + 5 wt% graphite (7-10 µm) lubricated environment for the matrix alloy and the composite, whereas for cast iron the addition of solid lubricant increased the wear rate, and the minimum wear rate was obtained in the oil-lubricated environment. The cast iron experienced higher frictional heating than the matrix alloy and the composite in all cases, especially at higher loads. As far as the friction coefficient is concerned, a mixed trend of behaviour was noted: the wear rate and frictional heating increased with load, while the friction coefficient was affected in the opposite manner.
Test duration influenced the frictional heating and friction coefficient of the samples in a mixed manner.

Keywords: solid lubricant, sliding wear, grey cast iron, zinc based metal matrix composites

Procedia PDF Downloads 292
796 Risk Factors and Regional Difference in the Prevalence of Fecal Carriage Third-Generation Cephalosporin-Resistant E. Coli in Taiwan

Authors: Wan-Ling Jiang, Hsin Chi, Jia-Lu Cheng, Ming-Fang Cheng

Abstract:

Background: Investigating the risk factors for the fecal carriage of third-generation cephalosporin-resistant E. coli could contribute to further disease prevention. Previous research on the prevalence of third-generation cephalosporin resistance in children in different regions of Taiwan is limited. This project aims to explore the risk factors and regional differences in the prevalence of third-generation cephalosporin-resistant and other antibiotic-resistant E. coli in the northern, southern, and eastern regions of Taiwan. Methods: We collected data from children aged 0 to 18 in community or outpatient clinics from July 2022 to May 2023 in southern, northern, and eastern Taiwan. The questionnaire was designed to survey the characteristics of the participants and possible risk factors, such as clinical information, household environment, drinking water, and food habits. After collecting fecal samples and isolating E. coli from stool cultures, antibiotic sensitivity tests and MLST typing were performed. The questionnaires were used to analyze the risk factors for third-generation cephalosporin-resistant E. coli in the three regions of Taiwan. Results: Of the 246 stool samples, third-generation cephalosporin-resistant E. coli accounted for 37.4% (97/246) of all isolates. Among the three regions, the highest prevalence of fecal carriage of third-generation cephalosporin-resistant E. coli was observed in southern Taiwan (42.7%), followed by northern Taiwan (35.5%) and eastern Taiwan (28.4%). Multi-drug-resistant E. coli had prevalence rates of 51.9%, 66.3%, and 37.1% in the northern, southern, and eastern regions, respectively. MLST typing revealed that ST131 was the most prevalent type (11.8%). The prevalence of ST131 in northern, southern, and eastern Taiwan was 10.1%, 12.3%, and 13.2%, respectively.
Risk-factor analysis identified lower paternal education, overweight status, and a non-vegetarian diet as statistically significant risk factors for third-generation cephalosporin-resistant E. coli. Conclusion: The fecal carriage rates of antibiotic-resistant E. coli among Taiwanese children are on the rise. This study found regional disparities in the prevalence of third-generation cephalosporin-resistant and multi-drug-resistant E. coli, with southern Taiwan having the highest prevalence. Lower paternal education, overweight status, and a non-vegetarian diet were the potential risk factors for third-generation cephalosporin-resistant E. coli in this study.

Keywords: Escherichia coli, fecal carriage, antimicrobial resistance, risk factors, prevalence

Procedia PDF Downloads 40
795 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers

Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker

Abstract:

The Advanced Heavy Water Reactor (AHWR) is a vertical pressure-tube-type, heavy-water-moderated and boiling-light-water-cooled natural-circulation-based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18, and 24 rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. Single-phase flow conditions exist in the reactor rod bundle during startup and along a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during startup has necessitated the determination of the turbulent mixing rate of coolant among these subchannels. Thus, it is vital to evaluate turbulent mixing between the subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, the computational fluid dynamics (CFD) methodology can be useful for investigating the thermal-hydraulic phenomena in a nuclear fuel assembly. The present report covers the results of simulations of the pressure drop, velocity variation, and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, the 54-rod assemblies are simulated with ANSYS Fluent 15 using steady simulations with ANSYS Workbench meshing. The simulations have been carried out with water at a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm² in the bare region, with a hydraulic diameter of 8.105 mm. A benchmark k-ε model has been used as the turbulence model, and symmetry conditions are set as boundary conditions. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The rod size and pitch in the test are the same as those of the actual rod bundle in the prototype.
Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannels has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
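The reported flow condition can be sanity-checked from the definition Re = ρvD_h/μ. Assuming water properties near room temperature (the density and viscosity below are assumptions, not values from the abstract), the bulk velocity implied by Re = 9861.83 and D_h = 8.105 mm lands inside the stated 0-1.2 m/s range:

```python
# Assumed water properties near 25 °C (not stated in the abstract).
rho = 997.0      # density, kg/m^3
mu = 8.9e-4      # dynamic viscosity, Pa*s

Re = 9861.83     # Reynolds number from the abstract
Dh = 8.105e-3    # hydraulic diameter, m

# Re = rho * v * Dh / mu  ->  v = Re * mu / (rho * Dh)
v = Re * mu / (rho * Dh)
print(round(v, 3), "m/s")  # about 1.09 m/s, within the 0-1.2 m/s range
```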

Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal–hydraulic

Procedia PDF Downloads 303
794 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls

Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin

Abstract:

The objective of this paper is to investigate the effects of wall thickness and axial loading during a fire test on the load-bearing capacity of fire-damaged normal-strength concrete walls. These two factors affect the temperature distributions in concrete members and are mainly determined through experiments. Toward this goal, three wall specimens of different thicknesses were heated for 2 h according to the ISO standard heating curve, and the temperature distributions through their thicknesses were measured using thermocouples. In addition, two wall specimens were heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on the wall thickness and on the axial load applied during the test. After the fire tests, the specimens were cured for one month, followed by load testing. The heated specimens were compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls showed a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference became evident with respect to the wall thickness. To validate the experimental results, finite element models were generated in which the material properties obtained from the experiments were subjected to elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, was then applied to model fire-damaged walls 2,800 mm high, the typical story height of residential buildings in Korea, considering the buckling effect. The models for the structural analyses were generated from the deformed shape obtained after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not depend significantly on the wall thickness, owing to the restraint of the pinned ends.
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load during the fire is within approximately 5%.

Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis

Procedia PDF Downloads 200
793 A Study of Anthropometric Correlation between Upper and Lower Limb Dimensions in Sudanese Population

Authors: Altayeb Abdalla Ahmed

Abstract:

The skeletal phenotype is a product of a balanced interaction between genetics and environmental factors throughout different life stages. Therefore, interlimb proportions vary between populations. Although interlimb proportion indices have been used in anthropology to assess the influence of various environmental factors on the limbs, an extensive literature review revealed a paucity of published research assessing correlations between limb parts and the possibility of reconstruction. Hence, this study aims to assess the relationships between upper- and lower-limb parts and to develop regression formulae to reconstruct the parts from one another. The left upper arm length, ulnar length, wrist breadth, hand length, hand breadth, tibial length, bimalleolar breadth, foot length, and foot breadth of 376 right-handed subjects, comprising 187 males and 189 females aged 25-35 years, were measured. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then, sex-specific simple and multiple linear regression models were used to estimate upper-limb parts from lower-limb parts and vice versa. The results indicated significant sexual dimorphism for all variables and a significant correlation between the upper- and lower-limb parts (p < 0.01). Linear and multiple (stepwise) regression equations were developed to reconstruct the limb parts in the presence of a single dimension or multiple dimensions from the other limb. Multiple stepwise regression equations generated better reconstructions than simple equations. These results are significant in forensics because they can aid in the identification of multiple isolated limb parts, particularly during mass disasters and criminal dismemberment. Although DNA analysis is the most reliable tool for identification, its use faces multiple limitations in less developed countries, e.g., cost, facility availability, and trained personnel.
Furthermore, the findings have important implications for plastic and orthopedic reconstructive surgery. This is the only reported study assessing the correlation and prediction capability between many of these upper- and lower-limb dimensions. The present study demonstrates a significant correlation between the interlimb parts in both sexes, which indicates the possibility of reconstruction using regression equations.
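The reconstruction idea can be sketched in a few lines. The data below are synthetic and the coefficients hypothetical; the study's actual sex-specific equations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample: foot length (cm) and a correlated hand length (cm).
foot = rng.normal(25.0, 1.5, 200)
hand = 0.55 * foot + 5.0 + rng.normal(0.0, 0.4, 200)

# Fit hand ~ a + b * foot by least squares (simple linear regression).
A = np.column_stack([np.ones_like(foot), foot])
(a, b), *_ = np.linalg.lstsq(A, hand, rcond=None)

# Reconstruct hand length for an isolated 26 cm foot.
pred = a + b * 26.0
print(round(pred, 2))  # close to 0.55 * 26 + 5 = 19.3 cm
```

A stepwise multiple regression, as used in the study, would add further lower-limb dimensions as predictors and retain only those that significantly improve the fit.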

Keywords: anthropometry, correlation, limb, Sudanese

Procedia PDF Downloads 272
792 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation can be studied within the broader framework of constrained language, with simplification as one of the universal features shared by constrained language varieties owing to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been reported. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices is captured by word-form entropy and POS-form entropy, and a comparison is made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable with traditional indices. Another advantage of entropy-based measures is that they are reasonably stable across languages and thus allow reliable comparison among studies of different language pairs. As for the data, one established corpus (CLOB) and two self-compiled corpora are used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens, and genre (press) and text length (around 2,000 words per text) are comparable across corpora.
More specifically, word-form entropy and POS-form entropy are calculated as indicators of lexical and syntactic complexity, and ANOVA tests are conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may indicate the constraints shared by, and peculiar to, each variety.
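Word-form entropy as used here follows Shannon's definition H = -Σ p(w)·log2 p(w) over the word-form frequency distribution. A minimal sketch (the tokenisation is deliberately naive, and the toy texts are of course not the corpora):

```python
import math
from collections import Counter

def word_form_entropy(text):
    """Shannon entropy (bits) of a text's word-form distribution."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Repetition lowers entropy: a text that reuses word forms scores below
# a same-length text in which every word form is distinct.
repetitive = "the cat sat on the mat"
varied = "one two three four five six"
print(word_form_entropy(repetitive), word_form_entropy(varied))
```

This is what makes entropy sensitive to both frequency and evenness: reusing "the" twice in six tokens drops H below the log2(6) ≈ 2.585 bits of a maximally diverse six-token text.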

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification

Procedia PDF Downloads 72
791 Parametrical Analysis of Stain Removal Performance of a Washing Machine: A Case Study of Sebum

Authors: Ozcan B., Koca B., Tuzcuoglu E., Cavusoglu S., Efe A., Bayraktar S.

Abstract:

A washing machine is mainly used for removing various types of dirt and stains and for eliminating malodorous substances from textile surfaces. Stains originate from various sources, from the human body to environmental contamination; therefore, there are various methods for removing them. They are roughly classified into four groups: oily (greasy) stains, particulate stains, enzymatic stains, and bleachable (oxidizable) stains. Oily stains on clothing commonly result from contact with organic substances of the human body (e.g., perspiration, skin shedding, and sebum) or from exposure to oily environmental pollutants (e.g., oily foods). Studies have shown that human sebum is a major component of the oily soil found on garments, and if it ages under certain environmental conditions, it can generate stubborn yellow stains on the textile surface. In this study, a parametric investigation was carried out into the key factors affecting the cleaning performance (specifically, the sebum removal performance) of a washing machine. These parameters are the mechanical agitation percentage of the tumble, the water consumed, and the total washing period. A full factorial design of experiments was used to capture all possible parametric interactions, using the Minitab 2021 statistical program. Tests were carried out with a commercial liquid detergent and two types of sebum-soiled fabric: cotton and cotton + polyester. The parametric results revealed that, for both test samples, increasing the washing time and the mechanical agitation led to much better sebum removal. However, the water amount had a different effect for each sample: increasing it decreased the performance for cotton + polyester fabrics, while it was favorable for cotton fabric. It was also discovered that the type of textile can greatly affect the sebum removal performance.
Results showed that cotton + polyester fabrics are much easier to clean than cotton fabric.
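A full factorial design simply enumerates every combination of factor levels. The factor names and levels below are illustrative stand-ins, not the study's actual settings:

```python
from itertools import product

# Hypothetical levels for the three washing parameters.
factors = {
    "agitation_pct": [40, 70, 100],
    "water_litres": [10, 15],
    "wash_minutes": [30, 60, 90],
}

# Every combination of levels: 3 * 2 * 3 = 18 runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 18
print(runs[0])    # {'agitation_pct': 40, 'water_litres': 10, 'wash_minutes': 30}
```

Because every level combination is present, main effects and all interaction effects can be estimated, which is exactly why the full factorial layout is chosen over a one-factor-at-a-time sweep.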

Keywords: laundry, washing machine, low-temperature washing, cold wash, washing efficiency index, sustainability, cleaning performance, stain removal, oily soil, sebum, yellowing

Procedia PDF Downloads 116
790 Decrease of Aerobic Capacity in Twenty Years in Lithuanian 11–18 Years-Old Youth

Authors: Arunas Emeljanovas, Brigita Mieziene, Tomas Venckunas

Abstract:

Background: The level of aerobic capacity in school-age children provides important information about current and future cardiovascular, skeletal, and mental health. It is widely recognised that the risk factors for modern chronic diseases of adulthood have their origins in childhood and adolescence. The aim of the study was to analyse the trends in aerobic capacity across decades within gender and age groups. Methods: The research included data from three nationally representative cohort studies performed in Lithuania in 1992, 2002, and 2012 among 11- to 18-year-old school children. A total of 18,294 school children were recruited for testing. Only those who had their body weight and height measured and completed the 20 m shuttle endurance test were included in the analysis, giving 15,213 students (7,608 boys and 7,605 girls). Permission to conduct the study was obtained from the Lithuanian Bioethics Committee (permission number BE-2-45). Major findings: Results are presented across gender and age groups. The comparison of shuttle endurance test results, controlling for body mass index, indicated a constant decrease in aerobic capacity across decades in both genders and all age groups. The deterioration in aerobic capacity in boys ranged from 17 to 43 percent across age groups within decades, with the biggest decrease in 14-year-old boys. The deterioration in girls ranged from 19 to 37 percent across age groups, with the biggest decrease in 11-year-old girls. Moreover, girls had lower levels of aerobic capacity across all age groups and all three decades. Body mass index, as a covariate, accounted for up to six percent of the deterioration in aerobic capacity.
Conclusion: The detected relationships may reflect the level and pattern of engagement in physical activity and sports, where increased activity is associated with superior test performance because of upregulated physiological function and heightened competitive/motivational levels. The significance of the decade indirectly supports the importance of the recently changed activity patterns among schoolchildren for this relationship.

Keywords: aerobic capacity, cardiovascular health, endurance, school age children

Procedia PDF Downloads 148
789 Comparative Effect of Self-Myofascial Release as a Warm-Up Exercise on Functional Fitness of Young Adults

Authors: Gopal Chandra Saha, Sumanta Daw

Abstract:

Warming up is an essential component for optimizing performance in various sports before a physical training session. This study investigated the immediate comparative effects of self-myofascial release through vibration rolling (VR), non-vibration rolling (NVR), and static stretching, as parts of a warm-up treatment, on the functional fitness of young adults. Functional fitness is a classification of training that prepares the body for real-life movements and activities. For the present study, 20 male physical education students aged 20-25 years were selected as subjects. The functional fitness variables studied were flexibility, muscle strength, agility, and the static and dynamic balance of the lower extremity. Each of the three warm-up protocols was administered on consecutive days, i.e., with a 24 h gap, and all tests were administered in the morning. The mean and SD were used as descriptive statistics. The significance of statistical differences among the groups was measured by applying an F-test, and to locate the differences, a post hoc test (least significant difference) was applied. The study found that only flexibility showed a significant difference among the three types of warm-up exercise. The results indicated that VR has more impact on myofascial release, in terms of flexibility, than NVR and static stretching as parts of a warm-up, as the p-value was less than 0.05. Within the three warm-up exercises, the vibration roller showed a better mean difference than NVR and static stretching on the functional fitness of young physical education practitioners, although the results were insignificant for muscle strength, agility, and the static and dynamic balance of the lower extremity.
These findings suggest that sports professionals and coaches may take VR into account when designing more efficient and effective pre-performance routines over the long term to improve exercise performance. VR has high potential to translate into practical on-field application.
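The F-test used here compares between-group to within-group variance. A minimal hand-rolled version with made-up flexibility scores (not the study's data) illustrates the computation:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical flexibility scores (cm) for VR, NVR, and static stretching.
vr, nvr, stretch = [1.0, 2.0, 3.0], [11.0, 12.0, 13.0], [21.0, 22.0, 23.0]
f = one_way_anova_f([vr, nvr, stretch])
print(f)  # 300.0
```

A large F (here 300 with df = 2 and 6) signals that group means differ far more than within-group scatter would predict, after which a post hoc test such as LSD identifies which pairs differ.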

Keywords: self-myofascial release, functional fitness, foam roller, physical education

Procedia PDF Downloads 114
788 Cognitive Performance and Physiological Stress during an Expedition in Antarctica

Authors: Andrée-Anne Parent, Alain-Steve Comtois

Abstract:

The Antarctic environment can be a great challenge for human exploration. Explorers need to be focused on the task and require the physical abilities to succeed and survive in complete autonomy in this hostile environment. The aim of this study was to observe cognitive performance and physiological stress, using a biomarker (cortisol) and hand-grip strength, during an expedition in Antarctica. Six explorers undertook completely autonomous exploration on the Forbidden Plateau in Antarctica to reach unknown summits over a 30-day period. The Stroop test, a simple reaction-time test, and a mood scale (PANAS) were administered every week during the expedition. Saliva samples were taken before sailing to Antarctica, on the first day on the continent, after the mission on the continent, and on the boat return trip. Furthermore, hair samples were taken before and after the expedition. The results were analyzed with SPSS using repeated-measures ANOVA. The Stroop and mood-scale results are presented in the following order: 1) before sailing to Antarctica, 2) the first day on the continent, 3) after the mission on the continent, and 4) on the boat return trip. No significant difference was observed for the Stroop test (759±166 ms, 850±114 ms, 772±179 ms, and 833±105 ms, respectively) or the PANAS (39.5±5.7, 40.5±5, 41.8±6.9, 37.3±5.8 for positive emotions, and 17.5±2.3, 18.2±5, 18.3±8.6, 15.8±5.4 for negative emotions, respectively) (p>0.05), although there appeared to be an improvement at the end of the second week. Furthermore, the simple reaction time was significantly lower at the end of the second week, a moment when important decisions were taken about the mission, than the week before (416±39 ms vs. 459.8±39 ms, respectively; p=0.030). The saliva cortisol was not significantly different (p>0.05), possibly due to large variations, and seemed to reach a peak on the first day on the continent.
However, the hair cortisol increased significantly from pre- to post-expedition (2.4±0.5 pg/mg pre-expedition vs. 16.7±9.2 pg/mg post-expedition, p=0.013), showing important stress during the expedition. Moreover, no significant difference was observed in grip strength except between after the mission on the continent and after the boat return trip (91.5±21 kg vs. 85±19 kg, p=0.20). In conclusion, cognitive performance does not seem to be affected during the expedition; rather, it seems to increase at specific important events when the crew focused on the task at hand. The physiological stress does not seem to change significantly at specific moments; however, a global pre-post mission measure can be important, and for this reason, for long-term missions, a pre-expedition baseline measure is important for crewmembers.

Keywords: Antarctica, cognitive performance, expedition, physiological adaptation, reaction time

Procedia PDF Downloads 225
787 Psychological Factors of Readiness of Defectologists to Professional Development: On the Example of Choosing an Educational Environment

Authors: Inna V. Krotova

Abstract:

The study pays special attention to defining the psychological potential of a specialist-defectologist that determines their desire to increase the level of their professional competence. The group comprised participants of an educational environment: the additional professional program 'Technologies of psychological and pedagogical assistance for children with complex developmental disabilities', implemented by the Department of Defectology and Clinical Psychology of KFU jointly with the Support Fund for Deafblind People 'Co-Unity'. The purpose of our study was to identify the psychological aspects of the readiness of specialist-defectologists for professional development. The study assessed indicators of psychological preparedness, taking into account its four components: motivational, cognitive, emotional and volitional. We used valid and standardized tests throughout the study. Factor analysis of the data (extraction method: principal component analysis; rotation method: varimax with Kaiser normalization; rotation converged in 12 iterations) identified three factors with maximum factor loadings among the 24 indices, and their correlation coefficients with other indicators were taken into account at the reliability levels p ≤ 0.001 and p ≤ 0.01. The system-forming factor was thus determined to be 'motivation to achieve success'; it formed a correlation galaxy with two other factors, 'general internality' and 'internality in the field of achievements', as well as with the psychological indicators 'internality in the field of family relations', 'internality in the field of interpersonal relations' and 'low self-control-high self-control' (the scale names match those used in the analysis methods). In conclusion, we present proposals to take into account the psychological model of readiness of specialist-defectologists for professional development and to stimulate the growth of their professional competence. The study has practical value for all providers of special education and for organizations that employ specialist-defectologists, teacher-defectologists, teachers for correctional and ergotherapeutic activities, and specialists working in the field of correctional-pedagogical activity (e.g., speech therapists) with people with special needs who require genuine professional support.
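The extraction-plus-rotation pipeline the abstract names (principal components followed by varimax with Kaiser normalization) can be sketched self-containedly with numpy; the survey data below are simulated, not the study's, and serve only to show how rotated loadings separate items onto factors:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix via the standard SVD iteration."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L * (np.sum(L**2, axis=0) / p)))
        R = u @ vt
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R

rng = np.random.default_rng(0)
# Simulated survey: 100 respondents x 6 items driven by two latent factors
f = rng.normal(size=(100, 2))
X = np.column_stack([f[:, 0], f[:, 0], f[:, 0], f[:, 1], f[:, 1], f[:, 1]])
X += 0.3 * rng.normal(size=X.shape)

# Principal-component extraction: loadings from the correlation matrix
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:2]                 # keep two components
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
rotated = varimax(loadings)

# After rotation each item should load mainly on one factor
print(np.round(np.abs(rotated), 2))
```

In practice one would retain factors by Kaiser's eigenvalue-greater-than-one rule or a scree plot before rotating, as statistical packages such as SPSS do.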

Keywords: psychological readiness, defectologist, professional development, psychological factors, special education, professional competence, innovative educational environment

Procedia PDF Downloads 154
786 Systems Intelligence in Management (High Performing Organizations and People Score High in Systems Intelligence)

Authors: Raimo P. Hämäläinen, Juha Törmänen, Esa Saarinen

Abstract:

Systems thinking has been acknowledged as an important approach in the strategy and management literature ever since the seminal works of Ackoff in the 1970s and Senge in the 1990s. The early literature focused largely on structures and organizational dynamics. Understanding systems is important, but making improvements also requires ways to understand human behavior in systems. Peter Senge's book The Fifth Discipline inspired the development of the concept of Systems Intelligence (SI), which integrates the concepts of personal mastery and systems thinking. SI refers to intelligent behavior in the context of complex systems involving interaction and feedback. It is a competence related to the skills needed in strategy and in the environment of modern industrial engineering and management, where people skills and systems play an increasingly important role. Eight factors of Systems Intelligence have been identified from extensive surveys; they relate to perceiving, attitude, thinking and acting. The personal self-evaluation test developed consists of 32 items and can also be applied in a peer-evaluation mode. The concept and test extend to organizations too: one can speak of organizational systems intelligence. This paper reports the results of an extensive survey based on peer evaluation. The results show that systems intelligence correlates positively with professional performance. People in a managerial role score higher in SI than others. SI scores improve with age, but there is no gender difference. Top organizations score higher in all SI factors than lower-ranked ones. The SI tests can also be used as leadership and management development tools that support self-reflection and learning. Finding ways of enhancing organizational learning and development is important, and gamification is a promising new approach. The items in the SI test have been used to develop an interactive card game following the Topaasia game approach. The game is an easy way of engaging people in a process that helps participants see and approach problems in their organization, and it helps individuals identify challenges in their own behavior and improve their SI.
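The reported SI-performance association is, at its simplest, a correlation between a survey score and a performance rating. A sketch of that computation on hypothetical peer-evaluation data (the paper's dataset and scales are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical peer-evaluation data: SI score (1-7 scale) and a
# professional-performance rating for 50 respondents
si = rng.uniform(3.0, 6.5, size=50)
performance = 0.8 * si + rng.normal(0.0, 0.5, size=50)

r = np.corrcoef(si, performance)[0, 1]
print(f"Pearson r = {r:.2f}")  # a positive r mirrors the reported SI-performance link
```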

Keywords: gamification, management competence, organizational learning, systems thinking

Procedia PDF Downloads 73
785 Seismic Response of Structure Using a Three Degree of Freedom Shake Table

Authors: Ketan N. Bajad, Manisha V. Waghmare

Abstract:

Earthquakes are among the biggest threats to civil engineering structures, costing billions of dollars and thousands of lives around the world every year. Various experimental techniques can be employed to verify the seismic performance of structures, such as pseudo-dynamic tests (a nonlinear structural dynamics technique), real-time pseudo-dynamic tests, and the shaking table test method. A shake table is a device used for shaking structural models or building components mounted on it; it simulates a seismic event using existing seismic data, closely reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. Types of shake table include the vertical shake table, the horizontal shake table, the servo-hydraulic shake table and the servo-electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., X, Y and Z directions) shake table. A 3-DOF shaking table is a useful experimental apparatus, as it reproduces a desired real-time acceleration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method. The table is designed for a capacity of up to 981 N. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model records the data. The experimental results are validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used for direct numerical conclusions (such as stiffness or deflection), as many uncertainties are involved in scaling a small-scale model. The model shows the modal forms and gives rough deflection values. The experimental results demonstrate the shake table to be the most effective of the available methods for the seismic assessment of structures.
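Interpreting a scaled shake-table model depends on similitude laws. Under the common acceleration (gravity) similitude, accelerations are kept at full-scale values, so time and frequency scales follow from the geometric scale alone; a sketch of those relations (the actual scale factor used in the paper is not stated, so 1:10 below is purely illustrative):

```python
import math

def similitude_factors(length_scale):
    """Scale factors (model/prototype) under acceleration similitude:
    the earthquake accelerations are not scaled, so the input record
    must be time-compressed by the square root of the length scale."""
    l = length_scale
    return {
        "length": l,
        "acceleration": 1.0,              # input accelerations unchanged
        "time": math.sqrt(l),             # record compressed in time
        "frequency": 1.0 / math.sqrt(l),  # model vibrates faster
        "velocity": math.sqrt(l),
    }

# e.g. a hypothetical 1:10 scaled-down industrial-shed model
factors = similitude_factors(1 / 10)
print(f"time scale = {factors['time']:.3f}")
print(f"frequency scale = {factors['frequency']:.2f}")
```

This is one reason the model yields only rough values: quantities such as stress and force scale differently again unless material density is also adjusted.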

Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed

Procedia PDF Downloads 116
784 A Non-Invasive Neonatal Jaundice Screening Device Measuring Bilirubin on Eyes

Authors: Li Shihao, Dieter Trau

Abstract:

Bilirubin is a yellow substance made when the body breaks down old red blood cells. High levels of bilirubin can cause jaundice, a condition that makes the newborn's skin and the white part of the eyes look yellow. Jaundice is a major killer in developing countries in Southeast Asia, such as Myanmar, and in most parts of Africa, where jaundice screening is largely unavailable. Worldwide, 60% of newborns experience infant jaundice; one in ten will require therapy to prevent serious complications and lifelong neurologic sequelae. Limitations of current solutions: - Blood test: blood tests are painful, may be largely unavailable in poor areas of developing countries, and can be costly and unsafe due to insufficient investment and lack of access to health care systems. - Transcutaneous jaundice meter: 1) it can provide reliable results only for Caucasian newborns, because current technologies measure bilirubin by the color of the skin; essentially, the darker the skin, the harder the measurement; 2) current jaundice meters are not affordable for most underdeveloped areas in Africa, such as Kenya and Togo; 3) fat tissue under the skin also influences the accuracy, giving overestimated results; 4) current jaundice meters are not reliable after treatment (phototherapy), because bilirubin levels underneath the skin are reduced first while overall levels may remain quite high. Thus, there is an urgent need for a low-cost non-invasive device that is effective not only for Caucasian babies but also for Asian and African newborns, to save lives at the most vulnerable time and prevent complications such as brain damage. Instead of measuring bilirubin on the skin, we propose a new method of measuring on the sclera, which avoids differences in skin pigmentation and ethnicity, since the sclera is white regardless of racial background. This is a novel approach measuring bilirubin by an optical method of light reflection off the white part of the eye. Moreover, the device connects to a smart device, providing a user-friendly interface and the ability to record clinical data continuously. A disposable eye cap avoids contamination and fixes the distance to the eye.

Keywords: jaundice, bilirubin, non-invasive, sclera

Procedia PDF Downloads 217
783 Microfiber Release During Laundry Under Different Rinsing Parameters

Authors: Fulya Asena Uluç, Ehsan Tuzcuoğlu, Songül Bayraktar, Burak Koca, Alper Gürarslan

Abstract:

Microplastics are contaminants that are widely distributed in the environment and have detrimental ecological effects. Recent research has also demonstrated the presence of microplastics in human blood and organs. Microplastics in the environment fall into two main categories: primary and secondary. Primary microplastics are released into the environment directly as microscopic particles, whereas secondary microplastics are smaller particles shed through the wear of synthetic materials in textiles and other products. Textiles are the main source of microplastic contamination in aquatic ecosystems: the laundering of synthetic textiles (34.8%) accounts for an average annual discharge of 3.2 million tons of primary microplastics into the environment. Research on microfiber shedding during laundry has recently gained traction; however, no comprehensive study has analyzed microfiber shedding from the standpoint of rinsing parameters. The purpose of the present study is to quantify microfiber shedding from fabric under different rinsing conditions and to determine the rinsing parameters that affect microfiber release in a laundry environment. In this regard, a parametric study is carried out to investigate the key factors affecting microfiber release from a front-load washing machine: the amount of water used during the rinsing step and the spinning speed at the end of the washing cycle. The Minitab statistical program is used to create a design of experiments (DOE) and analyze the experimental results. Tests are repeated twice, and apart from the controlled parameters, all other washing parameters are kept constant in the washing algorithm. At the end of each cycle, the released microfibers are collected via a custom-made filtration system and weighed with a precision balance. The results showed that increasing the water amount during the rinsing step drastically increased the amount of microplastic released from the washing machine. The parametric study also revealed that increasing the spinning speed increases microfiber release from textiles.
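A two-factor design of this kind, with each run replicated twice, can be enumerated in a few lines; the factor levels below are illustrative placeholders, as the paper does not report its actual settings:

```python
from itertools import product

# Hypothetical two-factor, two-level full factorial for the rinse study
levels = {
    "rinse_water_L": [10, 20],      # water amount during the rinse step
    "spin_speed_rpm": [800, 1400],  # spinning speed at the end of the cycle
}
replicates = 2                      # each run repeated twice, as in the study

runs = [
    dict(zip(levels, combo), replicate=rep + 1)
    for combo in product(*levels.values())
    for rep in range(replicates)
]

for run in runs:
    print(run)
print(f"total runs: {len(runs)}")   # 2 levels x 2 levels x 2 replicates = 8
```

Statistical packages such as Minitab generate the same run sheet (usually in randomized order) and then fit main effects and interactions to the measured fiber masses.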

Keywords: front load, laundry, microfiber, microfiber release, microfiber shedding, microplastic, pollution, rinsing parameters, sustainability, washing parameters, washing machine

Procedia PDF Downloads 71
782 poly(N-Isopropylacrylamide)-Polyvinyl Alcohol Semi-Interpenetrating Network Hydrogel for Wound Dressing

Authors: Zi-Yan Liao, Shan-Yu Zhang, Ya-Xian Lin, Ya-Lun Lee, Shih-Chuan Huang, Hong-Ru Lin

Abstract:

Traditional wound dressings, such as gauze and bandages, easily adhere to the tissue fluid exuded from the wound, causing secondary damage when removed. This study develops a hydrogel dressing that does not cause secondary damage to the wound when it is torn off and that, at the same time, creates an environment conducive to wound healing. The temperature-sensitive material N-isopropylacrylamide (NIPAAm) was used as the substrate. Because of its low mechanical properties, the hydrogel alone would break under pulling during human activity, so polyvinyl alcohol (PVA) was interpenetrated into it to enhance the mechanical properties, and a semi-interpenetrating network (semi-IPN) composed of poly(N-isopropylacrylamide) (PNIPAAm) and PVA was prepared by free-radical polymerization. PNIPAAm was cross-linked with N,N'-methylenebisacrylamide (NMBA) in an ice bath in the presence of linear PVA, and N,N,N',N'-tetramethylethylenediamine (TEMED) was added as an accelerator to speed up gel formation. The polymerization stage was carried out at 16°C for 17 hours; after gel formation, the gel was washed with distilled water for three days, with several water changes, to complete the preparation of the semi-IPN hydrogel. Finally, various tests were used to analyze the effects of different ratios of PNIPAAm and PVA on the semi-IPN hydrogels. The swelling test found that the maximum swelling ratio reaches about 50% at 21°C, and the higher the PVA ratio, the more water is absorbed. The saturated moisture content test showed that the more PVA is added, the higher the saturated water content. The water vapor transmission rate of the semi-IPN hydrogel is about 57 g/m²/24 hr and is largely independent of the PVA proportion. The LCST test found that the semi-IPN hydrogel possesses the same lower critical solution temperature (30-35°C) as the PNIPAAm hydrogel. The semi-IPN hydrogel prepared in this study responds well to temperature and is thermosensitive. It is expected that, after further improvement, it can be used in the treatment of surface wounds, overcoming the shortcomings of traditional dressings.
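The swelling ratio quoted above is conventionally computed from wet and dry gel masses; the abstract does not state its exact formula, so the sketch below assumes the common definition, with hypothetical masses chosen to reproduce the reported ~50%:

```python
def swelling_ratio(wet_mass_g, dry_mass_g):
    """Swelling ratio as a percentage: (W_s - W_d) / W_d * 100.
    Assumed conventional definition; the abstract does not specify one."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Hypothetical gel masses giving a ratio near the reported ~50% at 21 °C
print(f"{swelling_ratio(1.5, 1.0):.0f}%")
```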

Keywords: hydrogel, N-isopropylacrylamide, polyvinyl alcohol, hydrogel wound dressing, semi-interpenetrating polymer network

Procedia PDF Downloads 62
781 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with attention deficit/hyperactivity disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday situations are multimodal: a complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes lead, in the case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. The current study therefore includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females; age 30.08 (SD 9.3) years) and twenty-four healthy controls (9 females; age 26.88 (SD 6.3) years) were recruited. MI was examined using the McGurk effect, and the Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner, and a seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn 5.83%, controls Mdn 44.2%; U=160.5, p=0.022, r=-0.34). When ADHD patients did integrate phonemes, reaction times were significantly longer (ADHD Mdn 1260 ms, controls Mdn 582 ms; U=41.0, p<0.001, r=-0.56). In functional connectivity, the middle temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI appears deficient in ADHD patients for stimuli that require top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
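The group comparison above uses the Mann-Whitney U test with the effect size r = z / sqrt(N). A minimal stdlib sketch of that computation (the per-subject susceptibilities below are hypothetical and the groups smaller than the study's 25 and 24; no tie correction is applied):

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U with a normal-approximation z and effect size
    r = z / sqrt(N). Minimal sketch without tie correction, which is
    adequate for continuous measures with no exactly equal values."""
    n1, n2 = len(x), len(y)
    pooled = sorted(list(x) + list(y))
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # assumes no exact ties
    r1 = sum(rank[v] for v in x)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    r = z / math.sqrt(n1 + n2)
    return u1, z, r

# Hypothetical per-subject McGurk susceptibility (%) for two small groups
adhd     = [2.1, 4.0, 5.8, 7.5, 9.9, 12.3]
controls = [30.2, 38.7, 44.2, 51.0, 60.4, 71.1]

u, z, r = mann_whitney_u(adhd, controls)
print(f"U = {u:.0f}, z = {z:.2f}, r = {r:.2f}")
```

For small samples, statistical packages use exact U tables rather than the normal approximation shown here.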

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 105
780 Production of High Purity Cellulose Products from Sawdust Waste Material

Authors: Simiksha Balkissoon, Jerome Andrew, Bruce Sithole

Abstract:

Approximately half of the wood processed in the forestry, timber, pulp and paper (FTPP) sector is accumulated as waste. The concept of a 'green economy' encourages industries to employ revolutionary, transformative technologies to eliminate waste generation by exploring the development of new value chains. The transition towards an almost paperless world, driven by the rise of digital media, has resulted in a decline in traditional paper markets, prompting the FTPP sector to reposition itself and expand its product offerings by unlocking value-adding opportunities from renewable resources such as wood, in order to generate revenue and mitigate its environmental impact. The production of valuable products from wood waste such as sawdust has been extensively explored in recent years. Wood components such as lignin, cellulose and hemicelluloses, which can be extracted selectively by chemical processing, are suitable candidates for producing numerous high-value products. In this study, a novel approach was developed to produce high-value cellulose products, such as dissolving wood pulp (DWP), from sawdust. DWP is a high-purity cellulose product used in several applications in the pharmaceutical, textile, food, and paint and coatings industries. The proposed approach has the potential to eliminate several complex processing stages, such as pulping and bleaching, that are associated with traditional commercial processes for high-purity cellulose products such as DWP, making it less chemical-, energy- and water-intensive. The developed process followed a path of experimentally designed lab tests evaluating typical processing conditions, such as residence time, chemical concentrations, liquid-to-solid ratios and temperature, followed by the application of suitable purification steps. The product from the initial stage was characterized using commercially available DWP grades as reference materials. The chemical characteristics of the products thus far are similar to those of commercial products, making the proposed process a promising and viable option for the production of DWP from sawdust.

Keywords: biomass, cellulose, chemical treatment, dissolving wood pulp

Procedia PDF Downloads 166
779 Fast Transient Workflow for External Automotive Aerodynamic Simulations

Authors: Christina Peristeri, Tobias Berg, Domenico Caridi, Paul Hutcheson, Robert Winstanley

Abstract:

In recent years, the demand for rapid innovation in the automotive industry has led to the need for accelerated simulation procedures that retain a detailed representation of the simulated phenomena. The project's aim is to create a fast transient workflow for external aerodynamic CFD simulations of road vehicles. The geometry used was the SAE Notchback Closed Cooling DrivAer model, and the simulation results were compared with data from wind tunnel tests. Two types of mesh were generated for this study: one a mix of polyhedral cells near the surface and hexahedral cells away from the surface, the other an octree hex mesh with a rapid method of fitting to the surface. Three grid refinement levels were used for each mesh type, with the largest total cell count, for the octree mesh, close to 1 billion. A series of steady-state solutions was obtained on the three grid levels using a pseudo-transient coupled solver and a k-omega-based RANS turbulence model. A mesh-independent solution was found in all cases at the medium refinement level of 200 million cells. Stress-Blended Eddy Simulation (SBES), which uses a shielding function to explicitly switch between RANS and LES modes, was chosen for the transient simulations. A converged pseudo-transient steady-state solution was used to initialize the transient SBES run, which was set up with the SIMPLEC pressure-velocity coupling scheme to reach the fastest solution on both CPU and GPU solvers. An important part of this project was the use of FLUENT's multi-GPU solver: a Tesla A100 GPU has been shown to be 8x faster than an Intel 48-core Skylake CPU system, leading to significant simulation speed-up compared to the traditional CPU solver. The current study used 4 Tesla A100 GPUs and 192 CPU cores. The combination of rapid octree meshing and GPU computing shows significant promise in reducing time and hardware costs for industrial-strength aerodynamic simulations.

Keywords: CFD, DrivAer, LES, Multi-GPU solver, octree mesh, RANS

Procedia PDF Downloads 95
778 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine

Authors: Luphumlo Ncanywa, Paul Watts

Abstract:

Malaria kills approximately one million people annually, with children and pregnant women in sub-Saharan Africa bearing the greatest burden. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, so decreasing its burden and saving lives is essential. There is major concern about malaria parasites developing resistance to antimalarial drugs, and people are still dying because medicines remain unaffordable in less well-off countries. If more people could receive treatment through reduced drug costs, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability in many African countries; one therefore has to question how Africa would actually manufacture the drugs, active pharmaceutical ingredients or medicines developed within these research programs. Such manufacturing would quite likely be outsourced overseas, increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions under continuous flow, leading to the identification of reaction conditions suitable for use at production level. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production. Using an approach referred to as scale-out or numbering up, a reaction is first optimized in the laboratory using a single micro-reactor; to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize the synthesis of antimalarial drugs in continuous processing. This would provide a step change in pharmaceutical manufacturing technology, increasing the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine, a dichlorobenzylidene derivative, is an antimalarial drug used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred to a continuous-flow process in order to develop a better and reproducible process. Development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry: if better and cheaper manufacturing routes to antimalarial drugs can be developed and implemented where needed, antimalarial drugs are far more likely to be available to those in need.
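The scale-out (numbering-up) idea reduces to simple arithmetic: production capacity grows linearly with the number of identical optimized reactors. A sketch with hypothetical throughput figures, since the paper gives none:

```python
import math

def reactors_needed(target_kg_per_day, single_reactor_g_per_h):
    """Numbering-up: capacity is increased by running identical optimized
    micro-reactors in parallel rather than by enlarging the reactor."""
    per_reactor_kg_day = single_reactor_g_per_h * 24 / 1000
    return math.ceil(target_kg_per_day / per_reactor_kg_day)

# Hypothetical figures: one micro-reactor producing 5 g/h of product,
# against a 10 kg/day production target
n = reactors_needed(10, 5)
print(n)
```

The attraction is that the optimized laboratory conditions carry over unchanged to each parallel unit, avoiding the re-optimization that conventional scale-up requires.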

Keywords: antimalarial, flow, lumefantrine, synthesis

Procedia PDF Downloads 175
777 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

Media (video, animation, graphics) have long been used by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development; however, overreliance is a concern. This paper is part of a larger Ph.D. study examining these challenges for sports educators: most notably, how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's Cognitive Theory of Multimedia Learning, which proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the prior-knowledge principle highlights different learning outcomes for novice and non-novice learners. Little research, however, is available to support this principle in such domains as sports tactics and strategy. As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current coaching digital media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis. It is hoped the resulting data will help inform future instructional content design and delivery for sports practitioners and promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 80
776 Is Electricity Consumption Stationary in Turkey?

Authors: Eyup Dogan

Abstract:

The number of research articles analyzing the integration properties of energy variables has increased rapidly in the energy literature for about a decade. The stochastic behavior of energy variables is worth knowing for several reasons. For instance, national policies to conserve or promote energy consumption, which should be treated as shocks to energy consumption, will have only transitory effects if energy consumption is found to be stationary in a country. It is also important to know the order of integration in order to employ an appropriate econometric model. Despite the importance of the subject for applied energy economics and a huge volume of studies, several limitations remain in the existing literature. For example, many studies use aggregate energy consumption and national-level data, and a large part of the literature consists of multi-country studies or focuses solely on the U.S. This is the first study in the literature to consider a form of energy consumption by sector at the sub-national level. It investigates the unit root properties of electricity consumption for 12 regions of Turkey across four sectors, in addition to total electricity consumption, in order to fill the gaps noted above; in total, we analyze the stationarity properties of 60 cases. Because the use of multiple unit root tests makes the results robust and consistent, we apply the Dickey-Fuller unit root test based on generalized least squares regression (DFGLS), the Phillips-Perron unit root test (PP), and the Zivot-Andrews unit root test with one endogenous structural break (ZA). The main finding is that electricity consumption is trend stationary in 7 cases according to DFGLS and PP, whereas it is a stationary process in 12 cases when the structural change is taken into account by applying ZA. Thus, shocks to electricity consumption have transitory effects in those cases, namely agriculture in regions 1, 4 and 7; industrial in regions 5, 8, 9, 10 and 11; business in regions 4, 7 and 9; and total electricity consumption in region 11. Regarding policy implications, policies to decrease or stimulate the use of electricity have a long-run impact on electricity consumption in 80% of cases in Turkey, given that 48 of the 60 cases are non-stationary processes. On the other hand, the past behavior of electricity consumption can be used to predict its future behavior in only 12 cases.
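As an illustration of what these unit root tests examine, the core Dickey-Fuller regression can be written in a few lines of numpy on simulated series; DFGLS, PP and ZA (as used in the paper) add GLS detrending, robust errors and break terms, respectively, on top of this basic idea:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic of the simplest Dickey-Fuller regression
    (delta y_t = a + b * y_{t-1} + e); a b sufficiently below zero
    (t < ~-2.86 at 5% with a constant) rejects a unit root.
    Illustration only - not the DFGLS/PP/ZA variants of the paper."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(2)
e = rng.normal(size=300)
ar1 = np.zeros(300)              # stationary AR(1) with phi = 0.5
for t in range(1, 300):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]
walk = np.cumsum(e)              # random walk: has a unit root

t_ar = dickey_fuller_t(ar1)      # strongly negative: rejects a unit root
t_walk = dickey_fuller_t(walk)   # typically well above the critical value
print(f"AR(1) t = {t_ar:.2f}, walk t = {t_walk:.2f}")
```

Note that the t-statistic follows the non-standard Dickey-Fuller distribution under the null, which is why the 5% critical value is about -2.86 rather than -1.96.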

Keywords: unit root, electricity consumption, sectoral data, subnational data

Procedia PDF Downloads 388
775 Structure-Reactivity Relationship of Some Rhᴵᴵᴵ and Osᴵᴵᴵ Complexes with N-Inert Ligands in Ionic Liquids

Authors: Jovana Bogojeski, Dusan Cocic, Nenad Jankovic, Angelina Petrovic

Abstract:

Kinetically inert transition metal complexes, such as Rh(III) and Os(III) complexes, attract increasing attention as leading scaffolds for the development of potential pharmacological agents due to their inertness and stability. We have therefore designed and fully characterized several novel rhodium(III) and osmium(III) complexes with a tridentate nitrogen-donor chelate system. For some complexes, crystal structures were determined by X-ray analysis. The reactivity of the newly synthesized complexes towards small biomolecules, such as L-methionine (L-Met), guanosine-5’-monophosphate (5’-GMP), and glutathione (GSH), was examined. The reactivity of these complexes towards DNA/RNA (ribonucleic acid) duplexes was also investigated. The results show that the newly synthesized complexes exhibit good affinity towards the studied ligands. They also show that the complexes react faster with the RNA duplex than with the DNA duplexes and that, among the DNA duplexes, the reaction is faster with the 15-mer GG than with the 22-mer GG. UV-Vis (ultraviolet-visible) absorption spectroscopy and EB (ethidium bromide) displacement studies were used to examine the interaction of these complexes with CT-DNA and BSA (bovine serum albumin). All studied complexes showed good interaction ability with both DNA and BSA. Furthermore, DFT (density functional theory) calculations and docking studies were performed. The impact of the complexes on cell viability was tested by the MTT assay (a colorimetric assay for assessing cell metabolic activity) on the HCT-116 human colon cancer cell line. In addition, all these tests were repeated in the presence of several water-soluble, biologically active ionic liquids. The results indicate that the ionic liquids increase the activity of the investigated complexes.
All results obtained in this study imply that the introduction of different spectator ligands can be used to tune the reactivity of rhodium(III) and osmium(III) complexes. Finally, these results indicate that the examined complexes show the reactivity characteristics needed for potential anti-tumor agents, with possible targets being both DNA and proteins. Every new contribution in this field is highly warranted given the current lack of clinically used metallo-based alternatives to cisplatin.
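Substitution reactivity of inert complexes of this kind is commonly quantified by following an absorbance change under pseudo-first-order conditions and extracting an observed rate constant from the linearized decay. The sketch below uses entirely hypothetical numbers (k, A₀, A∞ are illustrative, not data from this study) to show the standard linearization ln(Aₜ − A∞) = ln(A₀ − A∞) − k·t.

```python
import math

def fit_kobs(times, absorbances, a_inf):
    """Least-squares slope of ln(A_t - A_inf) vs t; k_obs is minus the slope."""
    y = [math.log(a - a_inf) for a in absorbances]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    stt = sum((t - mt) ** 2 for t in times)
    slope = sum((t - mt) * (yi - my) for t, yi in zip(times, y)) / stt
    return -slope

# Hypothetical decay: k = 0.02 s^-1, A_0 = 1.0, A_inf = 0.2
k_true, a0, a_inf = 0.02, 1.0, 0.2
times = list(range(0, 200, 10))
absorbances = [a_inf + (a0 - a_inf) * math.exp(-k_true * t) for t in times]
k_obs = fit_kobs(times, absorbances, a_inf)
print(round(k_obs, 4))  # -> 0.02
```

With noiseless synthetic data the fit recovers the input rate constant exactly; with real kinetic traces, k_obs values at several nucleophile concentrations yield the second-order rate constant.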

Keywords: biomolecules, ionic liquids, osmium(III), rhodium(III)

Procedia PDF Downloads 129
774 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: portfolio investors, banks, real estate brokers, and construction companies, as well as policy makers and households. A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates; several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of the housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock, and the lagged price, together with the deviation of housing prices from their long-run equilibrium level, as the independent variables. These deviations were also estimated from the data. The aim of the analysis was to compare estimates between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, the differences in the short-run housing price dynamics estimates between the two time periods were mostly significant. Significance tests of the differences were provided by a model containing interaction terms between the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach.
This indicates a good fit of the CCE model. The estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that housing market dynamics evolve over time.
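The core idea of the CCE approach is to augment each unit's regression with cross-sectional averages of the dependent and independent variables, which proxy for unobserved common factors. The simulated sketch below illustrates only that idea (mean-group version, static specification); it is not the authors' model, which is dynamic and includes a long-run deviation term.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 50, 200, 0.5

f = rng.normal(size=T)                    # unobserved common factor
gamma = rng.normal(1.0, 0.5, size=N)      # heterogeneous factor loadings
x = rng.normal(size=(N, T)) + np.outer(gamma, f)  # regressor correlated with the factor
y = beta * x + np.outer(gamma, f) + rng.normal(scale=0.5, size=(N, T))

xbar, ybar = x.mean(axis=0), y.mean(axis=0)  # cross-sectional averages proxy for f

def unit_beta(i, augment):
    cols = [x[i], np.ones(T)]
    if augment:                           # CCE: add the cross-sectional averages
        cols += [xbar, ybar]
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, y[i], rcond=None)[0][0]

beta_ols = np.mean([unit_beta(i, False) for i in range(N)])  # biased by the common factor
beta_cce = np.mean([unit_beta(i, True) for i in range(N)])   # close to the true 0.5
print(round(beta_ols, 3), round(beta_cce, 3))
```

Because the omitted factor enters both the regressor and the error, plain per-unit OLS is biased, while the augmented (CCE) regressions recover the true coefficient.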

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 232
773 Effectiveness of Medication and Non-Medication Therapy on Working Memory of Children with Attention Deficit and Hyperactivity Disorder

Authors: Mohaammad Ahmadpanah, Amineh Akhondi, Mohammad Haghighi, Ali Ghaleiha, Leila Jahangard, Elham Salari

Abstract:

Background: Working memory is the capability to retain and manipulate information over a short period of time. This capability underlies complex reasoning and has been regarded as a specific and stable characteristic of individuals. Children with attention deficit hyperactivity disorder show deficits in working memory, which have been attributed to frontal lobe dysfunction. This study uses a new approach, with suitable tasks and methods, for training working memory and for assessing the effects of the training. Participants: The children participating in this study were 7-15 years old and had been diagnosed with attention deficit hyperactivity disorder by a psychiatrist and a psychologist based on DSM-IV criteria. The intervention group consisted of 8 boys and 6 girls with a mean age of 11 years (standard deviation 2), and the control group consisted of 2 girls and 5 boys with a mean age of 11.4 years (standard deviation 3). Three children in the intervention group and two in the control group were receiving medication. Results: Working memory training meaningfully improved performance in untrained areas such as visual-spatial working memory, as well as performance on Raven's progressive tests, a prototypical non-verbal, complex reasoning task. In addition, motor activity, measured as the number of head movements during the computerized testing program, was meaningfully reduced in the medication group. The results of the second test showed that similar training in teenagers and adults also improves cognitive functions, as it does in hyperactive individuals. Discussion: The results of this study showed that working memory performance improves through training and that these gains extend and generalize to other areas of cognitive function that received no training.
The training also improved performance on tasks related to prefrontal function and had a positive and meaningful impact on the motor activity of hyperactive children.

Keywords: attention deficit hyperactivity disorder, working memory, non-medical treatment, children

Procedia PDF Downloads 342
772 Flammability and Smoke Toxicity of Rainscreen Façades

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Four façade systems were tested using a reduced-height BS 8414-2 (5 m) test rig. An L-shaped masonry test wall was clad with three types of insulation and an aluminum composite panel with a non-combustible filler (meeting Euroclass A2). A large (3 MW) wooden crib was ignited in a recess at the base of the L, and the fire was allowed to burn for 30 minutes. Air velocity measurements and gas samples were taken from the main ventilation duct and also from a small additional ventilation duct, like those in an apartment bathroom or kitchen. This provided a direct route of travel for smoke from the building façade to a theoretical room, using a design similar to that of many high-rise buildings, where the vent is connected to rooms of approximately 30 m³. The times to incapacitation and lethality of the effluent were calculated both for the main exhaust vent and for a vent connected to a theoretical 30 m³ room. The rainscreen façade systems tested were combinations commonly seen in tower blocks across the UK: three tests used ACM A2 with stone wool, phenolic foam (PF), and polyisocyanurate (PIR) foam, and a fourth test was conducted with PIR and ACM-PE (polyethylene core). Measurements in the main exhaust duct were representative of the effluent from the burning wood crib. FEDs showed that incapacitation could occur up to 30 times faster with combustible insulation than with non-combustible insulation, with lethal gas concentrations accumulating up to 2.7 times faster than in other combinations. The PE-cored ACM/PIR combination produced a ferocious fire, resulting in the termination of the test after 13.5 minutes for safety reasons. Occupants of the theoretical room in the PIR/ACM A2 test would have reached an FED of 1 after 22 minutes; for PF/ACM A2, this took 25 minutes, and for stone wool, a lethal dose measurement of 0.6 was reached at the end of the 30-minute test.
In conclusion, when smoke toxicity is measured in the exhaust duct, there is little difference between façade systems: toxicity measured in the main exhaust is largely a result of the wood crib used to ignite the façade system. The addition of a vent allowed smoke toxicity to be quantified in the cavity of the façade, providing a realistic way of measuring the toxicity of smoke that could enter an apartment from a façade fire.
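Fractional effective dose is an accumulation over time of concentration relative to a lethal (or incapacitating) exposure dose, with FED = 1 marking the threshold. The sketch below shows the basic accumulation rule with a hypothetical CO trace and an assumed lethal C·t product; the numbers are illustrative, not this study's data, and standards such as ISO 13571 sum such terms over several gases with interaction corrections.

```python
def fed(concentration_series, dt_min, lethal_ct_product):
    """Accumulate fractional effective dose: FED = sum(C_i * dt) / (C*t)_lethal.
    FED = 1 marks the statistically lethal (or incapacitating) exposure."""
    return sum(c * dt_min for c in concentration_series) / lethal_ct_product

# Hypothetical CO trace (ppm) sampled each minute; lethal C*t product assumed 100,000 ppm*min
co_ppm = [0, 500, 2000, 5000, 8000, 10000, 10000, 10000]
print(fed(co_ppm, 1.0, 100_000))  # -> 0.455
```

A faster-accumulating effluent reaches FED = 1 in proportionally less time, which is how the "up to 30 times quicker" comparison between insulation types is expressed.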

Keywords: smoke toxicity, large-scale testing, BS8414, FED

Procedia PDF Downloads 42
771 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancer. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, whereas the smaller and lighter solid particles remain dispersed in the air for a long period and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into the alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, as previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. This study addressed these knowledge gaps using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch use and skid-steer loader operation to remove an HVAC system. Concentration ranges of nanoparticles in 13 size channels at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure
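Binned SMPS data of this kind are typically summarized as a total number concentration and a count-weighted geometric mean diameter. The sketch below uses the channel midpoints quoted above with purely hypothetical counts for one scan; it illustrates the summary calculation only and is not the study's data.

```python
import math

def total_and_gmd(diams_nm, counts):
    """Total number concentration (#/cm^3) and count-weighted geometric mean
    diameter (nm) from binned size-distribution data, e.g. one SMPS scan."""
    total = sum(counts)
    gmd = math.exp(sum(c * math.log(d) for d, c in zip(diams_nm, counts)) / total)
    return total, gmd

# SMPS channel midpoints (nm) from the text; the counts here are hypothetical
diams = [11.5, 15.4, 20.5, 27.4, 36.5, 48.7, 64.9, 86.6, 115.5, 154.0, 205.4]
counts = [500, 900, 500, 2000, 9000, 20000, 30000, 32000, 25000, 12000, 8000]
total, gmd = total_and_gmd(diams, counts)
print(total, round(gmd, 1))
```

The geometric (rather than arithmetic) mean is conventional for aerosol size distributions because the bins are spaced logarithmically in diameter.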

Procedia PDF Downloads 405
770 Biogas Production Using Water Hyacinth as a Means of Waste Management Control at Hartbeespoort Dam, South Africa

Authors: Trevor Malambo Simbayi, Diane Hildebrandt, Tonderayi Matambo

Abstract:

The rapid growth of population in recent decades has resulted in an increased need for energy to support human activities. As energy demand increases, so does the need for sources of energy other than fossil fuels. Furthermore, environmental concerns, such as global warming due to the use of fossil fuels, depleting fossil fuel reserves, and the rising cost of oil, have contributed to an increased interest in renewable sources of energy. Biogas is a renewable source of energy produced through anaerobic digestion (AD), and it offers a two-fold solution: it provides an environmentally friendly source of energy, and its production helps to reduce the amount of organic waste sent to landfills. This research seeks to address the waste management problem caused by an aquatic weed, water hyacinth (Eichhornia crassipes), at the Hartbeespoort (Harties) Dam in the North West Province of South Africa, through biogas production from the weed. Water hyacinth is a category 1 invasive species and is deemed the most problematic aquatic weed; it is said to double in size within five days. Eutrophication in the Hartbeespoort Dam has manifested itself through excessive algal blooms and water hyacinth infestation. A large amount of biomass from water hyacinth and algae is generated per annum over the two-hundred-hectare surface area of the dam exposed to the sun, and this biomass creates a waste management problem. When in full bloom, water hyacinth can cover nearly half of the surface of the Hartbeespoort Dam. The presence of water hyacinth in the dam has caused economic and environmental problems: economic activities such as fishing, boating, and recreation are hampered by its prolific growth.
This research proposes the use of water hyacinth as a feedstock or substrate for biogas production in order to provide an economical and environmentally friendly means of waste management for the communities living around the Hartbeespoort Dam. To achieve this objective, water hyacinth will be collected from the dam and mechanically pretreated before anaerobic digestion. Pretreatment is required for lignocellulosic materials such as water hyacinth because they are recalcitrant solids. Cow manure will be employed as the source of the microorganisms needed for biogas production. Once the water hyacinth and the cow dung are mixed, they will be placed in laboratory anaerobic reactors, and biogas production will be monitored daily through the downward displacement of water. The substrates (cow manure and water hyacinth) will be characterized to determine their nitrogen, sulfur, carbon, and hydrogen content as well as their total solids (TS) and volatile solids (VS). Liquid samples from the anaerobic digesters will be collected and analyzed for volatile fatty acid (VFA) composition by gas chromatography.
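Once the elemental (C, H, O, N) composition is known, a theoretical methane potential is often estimated with the Buswell-Boyle equation. The sketch below applies it to cellulose (C6H10O5) as an illustrative stand-in for water hyacinth fiber; this composition is an assumption for illustration, not a measurement from this study.

```python
def buswell_ch4_l_per_gvs(n, a, b, c=0.0):
    """Theoretical methane potential (L CH4 per g VS at STP) of a substrate
    with formula CnHaObNc, via the Buswell-Boyle stoichiometry:
    moles CH4 per mole substrate = n/2 + a/8 - b/4 - 3c/8."""
    mol_ch4 = n / 2 + a / 8 - b / 4 - 3 * c / 8
    mw = 12.011 * n + 1.008 * a + 15.999 * b + 14.007 * c   # g per mole substrate
    return 22.415 * mol_ch4 / mw                            # 22.415 L per mole gas at STP

# Cellulose (C6H10O5), the dominant polymer in lignocellulosic biomass:
print(round(buswell_ch4_l_per_gvs(6, 10, 5) * 1000))  # -> 415 (mL CH4 per g VS)
```

Real yields from anaerobic digestion are lower than this theoretical ceiling because part of the substrate is recalcitrant or is used for microbial growth, which is one motivation for the pretreatment step described above.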

Keywords: anaerobic digestion, biogas, waste management, water hyacinth

Procedia PDF Downloads 171