Search results for: variable parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4010

470 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells

Authors: Victorita Radulescu

Abstract:

Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both during periods of intense irrigation and in their absence, in times of drought. Currently in Romania, in the southern part of the country, the Baragan area, many agricultural lands face the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was applied to the porous medium to solve the water mass balance equation. Through a proper structure of the initial and boundary conditions, the flow in drainage or injection well systems may be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the actual levels of the polluting emissaries, following the double-step method. Major Findings/Results: The drainage condition is equivalent to operating regimes on two or three rows of wells with negative (extraction) flows, so as to ensure the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to keep the water table in accordance with the real constraints, restrictions are needed; for example, its top level must be kept below an imposed value in each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when positive pollutant values occur. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates of the wells and the drainage flows through them. Conclusions: The presented calculation scheme was applied to an area having a cross-section of 50 km between two emissaries with different altitudes and different pollution levels. The input data were correlated with in-situ measurements, such as the level of the bedrock, the grain size of the soil, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries.
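
To make the numerical approach concrete, the sketch below solves a linearized one-dimensional form of the Boussinesq equation with fixed heads at the two emissaries and point sinks standing in for drainage wells. It is a minimal finite-difference illustration (the paper uses finite elements and a double-step optimization), and all parameter values are assumed for illustration only.

```python
import numpy as np

# Minimal 1D sketch: explicit finite differences for a linearized
# Boussinesq equation  S*dh/dt = T*d2h/dx2 + q,  with fixed heads at the
# two emissaries as boundary conditions. All values are illustrative.

L, nx = 50_000.0, 101            # 50 km cross-section, number of grid nodes
dx = L / (nx - 1)
T, S = 5e-3, 0.15                # transmissivity [m^2/s], storativity [-]
dt = 0.2 * S * dx**2 / T         # stable explicit time step

h = np.full(nx, 10.0)            # initial water-table elevation [m]
h_left, h_right = 12.0, 8.0      # emissary levels (boundary conditions)
q = np.zeros(nx)                 # sources/sinks [m/s]; negative = drainage well
q[30] = q[60] = -2e-9            # two drainage wells between the emissaries

for _ in range(10_000):          # march in time toward a quasi-steady state
    h[0], h[-1] = h_left, h_right
    lap = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2
    h[1:-1] += dt * (T * lap + q[1:-1]) / S

print(f"water-table range: {h.min():.2f} .. {h.max():.2f} m")
```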

Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils

Procedia PDF Downloads 125
469 A Study of the Depression Status of Asian American Adolescents

Authors: Selina Lin, Justin M Fan, Vincent Zhang, Cindy Chen, Daniel Lam, Jason Yan, Ning Zhang

Abstract:

Depression is one of the most common mental disorders in the United States, and past studies have shown a concerning increase in the rates of depression in youth populations over time. Furthermore, depression is an especially important issue for Asian Americans because of the anti-Asian violence taking place during the COVID-19 pandemic. While Asian American adolescents are reluctant to seek help for mental health issues, past research has found a prevalence of depressive symptoms in them that has yet to be fully investigated. Studies have been conducted to understand and observe the impacts of the multifarious factors influencing the mental well-being of Asian American adolescents; however, they have generally been limited to qualitative investigation, and very few have attempted to quantitatively evaluate the relationship between depression levels and a comprehensive list of factors at the same time. To better quantify these relationships, this project investigated the prevalence of depression in Asian American teenagers, aged 12 to 19, mainly from the Greater Philadelphia Region. Through an anonymous survey, participants answered 48 multiple-choice questions pertaining to demographic information, daily behaviors, school life, family life, depression levels (quantified by the PHQ-9 assessment), and school and family support against depression. Each multiple-choice question was assigned as a factor and variable for statistical and dominance analysis to determine the most influential factors on the depression levels of Asian American adolescents. The results were validated via bootstrap analysis and t-tests. While certain influential factors identified in this survey are consistent with the literature, such as the parent-child relationship and peer pressure, several dominant factors were relatively overlooked in the past. These factors include the parents' relationship with each other, satisfaction with body image, sexual identity, support from the family, and support from the school. More than 25% of participants desired more support from their families and schools in handling depression issues. This study implies that it would be beneficial for Asian American parents and adolescents to participate in programs on parents' relationships with each other, parent-child communication, mental health, and sexual identity. A culturally inclusive school environment and more accessible mental health services would help Asian American adolescents combat depression. This survey-based study paves the way for further investigation of effective approaches for helping Asian American adolescents against depression.
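
As an illustration of the dominance-analysis step, the sketch below averages each predictor's increment to R² over all subsets of the remaining predictors; the data and predictor names are synthetic placeholders, not the survey data.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

# Minimal sketch of dominance analysis: a predictor's importance is its
# average increment to R^2 across all subsets of the other predictors.
rng = np.random.default_rng(0)
n, names = 300, ["parent_relation", "peer_pressure", "body_image", "school_support"]
X = rng.normal(size=(n, len(names)))
y = X @ np.array([0.5, 0.3, 0.4, 0.2]) + rng.normal(size=n)  # PHQ-9-like score

def r2(cols):
    if not cols:
        return 0.0
    model = LinearRegression().fit(X[:, cols], y)
    return model.score(X[:, cols], y)

p = len(names)
for j, name in enumerate(names):
    others = [k for k in range(p) if k != j]
    increments = []
    for size in range(p):                      # subsets of every size
        for sub in combinations(others, size):
            increments.append(r2(list(sub) + [j]) - r2(list(sub)))
    print(f"{name:15s} mean R^2 increment: {np.mean(increments):.3f}")
```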

Keywords: Asian American adolescents, depression, dominance analysis, t-test, bootstrap analysis

Procedia PDF Downloads 110
468 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Moreover, as a requirement of Industry 4.0, the digitalization of production and process models makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. In this way, it is possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales lower than one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem without a unique solution in industrial environments, and researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, will enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales lower than one millimeter because they are non-destructive. In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments by applying minor changes.
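
A minimal sketch of the scale-calibration idea: regress instrument readings against certified lengths of a material standard and derive a correction factor. The reference and measured values below are illustrative, not the authors' data.

```python
import numpy as np

# Calibrating one axis scale of an optical instrument against a material
# standard (e.g., a stage micrometer). All numbers are illustrative.

ref = np.array([100.0, 200.0, 300.0, 400.0, 500.0])    # certified lengths [um]
meas = np.array([100.4, 200.9, 301.1, 401.8, 502.1])   # instrument readings [um]

# Fit meas = a*ref + b; the scale correction factor is 1/a.
a, b = np.polyfit(ref, meas, 1)
residuals = meas - (a * ref + b)
u = residuals.std(ddof=2)                               # spread about the fit

print(f"scale factor a = {a:.5f}, offset b = {b:.3f} um")
print(f"correction factor 1/a = {1/a:.5f}, residual std = {u:.3f} um")
```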

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 122
467 Numerical Modeling and Experimental Analysis of a Pallet Isolation Device to Protect Selective Type Industrial Storage Racks

Authors: Marcelo Sanhueza Cartes, Nelson Maureira Carsalade

Abstract:

This research evaluates the effectiveness of a pallet isolation device for the protection of selective-type industrial storage racks. The device works only in the longitudinal direction of the aisle, and it is made up of a platform installed on the rack beams. At both ends, the platform is connected to the rack structure by a spring-damper system working in parallel. A system of wheels is arranged between the isolation platform and the rack beams in order to reduce friction, decouple the movement and improve the effectiveness of the device. The latter is evaluated by the reduction of the maximum dynamic responses of basal shear load and story drift relative to those of the same rack with the traditional construction system. In the first stage, numerical simulations of industrial storage racks were carried out with and without the pallet isolation device. The numerical results allowed us to identify the archetypes for which experimental tests would be most appropriate, thus limiting the number of trials. In the second stage, experimental tests were carried out on a shaking table on a select group of full-scale racks with and without the proposed device. The movement simulated by the shaking table was based on the Mw 8.8 earthquake of February 27, 2010, in Chile, registered at the San Pedro de la Paz station. The record was scaled in the frequency domain so that its response spectrum fits the design spectrum of NCh433. The experimental setup contemplates the installation of sensors to measure relative displacement and absolute acceleration. The movement of the shaking table with respect to the ground, the inter-story drift of the rack, and the movement of the pallets with respect to the rack structure were recorded. Accelerometers redundantly measured all of the above in order to corroborate the measurements and adequately capture both low- and high-frequency vibrations, for which displacement and acceleration sensors are respectively more reliable. The numerical and experimental results allowed us to identify the pallet isolation period as the variable with the greatest influence on the dynamic responses considered. It was also possible to establish that the proposed device significantly reduces both the basal shear and the maximum inter-story drift, by up to one order of magnitude.
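
The sketch below illustrates the modeling idea with a two-mass system: a rack story carrying a pallet platform through a spring-damper, excited by a synthetic ground acceleration. All masses, stiffnesses and the excitation are assumed values, not the tested rack's properties.

```python
import numpy as np

# Two-mass sketch: rack story (m1) + isolated pallet platform (m2) linked
# by a spring-damper, under ground acceleration. Illustrative parameters.

m1, k1, c1 = 2000.0, 8.0e5, 2.0e3      # rack story: mass, stiffness, damping
m2, k2, c2 = 1000.0, 2.0e4, 1.5e3      # pallet isolation (long-period) system

dt, n = 0.005, 8000
t = np.arange(n) * dt
ag = 3.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)  # synthetic motion

x = np.zeros((n, 2)); v = np.zeros((n, 2))                 # relative disp./vel.
for i in range(n - 1):
    x1, x2 = x[i]; v1, v2 = v[i]
    f1 = (-k1 * x1 - c1 * v1 + k2 * (x2 - x1) + c2 * (v2 - v1)) / m1 - ag[i]
    f2 = (-k2 * (x2 - x1) - c2 * (v2 - v1)) / m2 - ag[i]
    v[i + 1] = v[i] + dt * np.array([f1, f2])              # semi-implicit Euler
    x[i + 1] = x[i] + dt * v[i + 1]

base_shear = k1 * x[:, 0] + c1 * v[:, 0]
print(f"peak story drift: {np.abs(x[:, 0]).max():.4f} m, "
      f"peak base shear: {np.abs(base_shear).max():.0f} N")
```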

Keywords: pallet isolation system, industrial storage racks, basal shear load, inter-story drift

Procedia PDF Downloads 50
466 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the industrial partner involved must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new designs, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. In the first group, product information is collected, such as the product name and the number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were based previously. The artificial neural network model was trained on several thousands of real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups, considering the manufacturing order, was also investigated. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur, and the related data are collected.
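
A minimal sketch of the estimation approach: an MLP regressor mapping one-hot-encoded categorical setup features plus numeric tolerance features to setup duration. The features and data are synthetic placeholders, not the partner's MES/CAD data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import OneHotEncoder   # sparse_output needs sklearn >= 1.2
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 3 categorical features (e.g., material, colour,
# machine) and 2 numeric ones (e.g., finest surface, tightest tolerance).
rng = np.random.default_rng(1)
n = 2000
cat = rng.integers(0, 5, size=(n, 3))
num = rng.uniform(0, 1, size=(n, 2))
y = 30 + 10 * cat[:, 2] + 40 * num[:, 1] + rng.normal(0, 5, n)  # setup minutes

X = np.hstack([OneHotEncoder(sparse_output=False).fit_transform(cat), num])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(Xtr, ytr)
err = np.abs(model.predict(Xte) - yte)
print(f"mean abs error: {err.mean():.1f} min, std: {err.std():.1f} min")
```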

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 215
465 Platelet Volume Indices: Emerging Markers of Diabetic Thrombocytopathy

Authors: Mitakshara Sharma, S. K. Nema

Abstract:

Diabetes mellitus (DM) is a metabolic disorder prevalent in pandemic proportions, incurring significant morbidity and mortality due to the associated vascular angiopathies. Platelet-related thrombogenesis plays a key role in the pathogenesis of these complications. Most patients with type II DM suffer from preventable vascular complications, and early diagnosis can help manage these successfully. These complications are attributed to platelet activation, which can be recognised by an increase in the Platelet Volume Indices (PVI), viz. Mean Platelet Volume (MPV) and Platelet Distribution Width (PDW). This study was undertaken with the aim of finding a relationship between PVI and the vascular complications of diabetes mellitus, their importance as a causal factor in these complications, and their use as markers for the early detection of impending vascular complications in patients with poor glycaemic status. This is a cross-sectional study conducted over 2 years with a total of 930 subjects. The subjects were segregated into three groups on the basis of glycosylated haemoglobin (HbA1C) as: (a) diabetic, (b) non-diabetic and (c) subjects with impaired fasting glucose (IFG), with 300 individuals each in the IFG and non-diabetic groups and 330 individuals in the diabetic group. The diabetic group was further divided into two groups: (a) diabetic subjects with diabetes-related vascular complications and (b) diabetic subjects without diabetes-related vascular complications. Samples for HbA1C and platelet indices were collected using ethylenediaminetetraacetic acid (EDTA) as anticoagulant and processed on a SYSMEX XS-800i autoanalyser. The study revealed a stepwise increase in PVI from non-diabetics to IFG to diabetics. The MPV and PDW of diabetics, IFG and non-diabetics were 17.60 ± 2.04, 11.76 ± 0.73, 9.93 ± 0.64 and 19.17 ± 1.48, 15.49 ± 0.67, 10.59 ± 0.67, respectively, with a significant p value of 0.00 and a significant positive correlation (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875). However, a significant negative correlation was found between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). The MPV and PDW of subjects with and without diabetes-related complications were (15.14 ± 1.04) fl and (17.51 ± 0.39) fl, and (18.96 ± 0.83) fl and (20.09 ± 0.98) fl, respectively, with a significant p value of 0.00. The current study demonstrates raised platelet indices and reduced platelet counts in association with rising glycaemic levels and diabetes-related vascular complications across the various study groups, and shows that platelet morphology is altered with increasing glycaemic levels. These changes can be detected by measurements of PVI, which are simple, cost-effective and effortless tools and indicators of impending vascular complications in patients with deranged glycaemic control. PVI should be researched and explored further as surrogate markers to develop a clinical tool for early recognition of diabetes-related vascular changes and thereby help prevent them. They can prove especially useful in developing countries with limited resources. This is a multi-parameter, comprehensive study with an adequately powered design, and it represents a pioneering effort in India insofar as both platelet indices (MPV and PDW) and platelet count have been evaluated together for the first time in diabetics, non-diabetics, patients with IFG, and also in diabetic patients with and without diabetes-related vascular complications.
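
The sketch below mirrors the type of statistics reported (Pearson correlation of platelet indices with HbA1c, and a group comparison by t-test) on synthetic data generated around plausible group sizes; it is illustrative only, not the study dataset.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the 930-subject dataset: HbA1c per group and an
# MPV that rises with HbA1c. Group sizes follow the abstract (300/300/330).
rng = np.random.default_rng(2)
hba1c = np.concatenate([rng.normal(5.2, 0.3, 300),    # non-diabetic
                        rng.normal(6.0, 0.2, 300),    # impaired fasting glucose
                        rng.normal(8.5, 1.0, 330)])   # diabetic
mpv = 4.0 + 1.5 * hba1c + rng.normal(0, 1.0, hba1c.size)

r, p = stats.pearsonr(hba1c, mpv)
print(f"MPV-HbA1c correlation: r = {r:.3f}, p = {p:.2g}")

t, p = stats.ttest_ind(mpv[600:], mpv[:300], equal_var=False)  # diabetic vs non-diabetic
print(f"Welch t-test, diabetic vs non-diabetic MPV: t = {t:.1f}, p = {p:.2g}")
```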

Keywords: diabetes, HbA1C, IFG, MPV, PDW, PVI

Procedia PDF Downloads 217
464 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study

Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker

Abstract:

Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. In the literature, it has been suggested that earlier reintervention is associated with better survival, but anastomotic leakage can occur with a highly variable time interval to the index surgery. This study aims to evaluate the time interval between rectal cancer resection with primary anastomosis and reoperation, in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data of all primary rectal cancer patients who underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or an abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay and readmission. The association between time to reoperation and outcome was evaluated in three ways: per 2 days, before versus on or after postoperative day 5, and during primary admission versus readmission. Results: In total, 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rates. Thereafter, the distribution of adverse outcomes was spread more evenly over the 30-day postoperative period for patients with a defunctioning stoma. The median time interval from primary resection to reoperation was 7 days (IQR 4-14) for defunctioning stoma patients versus 5 days (IQR 3-13) for no-defunctioning stoma patients. The mortality rates after primary resection and after reoperation were comparable (for defunctioning vs. no-defunctioning stoma, respectively: 1.0% vs. 0.7%, P=0.106, and 5.0% vs. 2.3%, P=0.107). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e., mortality). Perhaps the combination of a physiological dip in the cellular immune response and the release of cytokines following surgery, together with a release of endotoxins caused by the bacteremia originating from the leakage, leads to a more profound sepsis. Another explanation might be that early leaks are not contained to the pelvis, leading to a more profound sepsis requiring early reoperation. Leakage with or without a defunctioning stoma resulted in different types of reintervention and a different time interval between surgery and reoperation.

Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation

Procedia PDF Downloads 112
463 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels

Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi

Abstract:

Introduction: Exercise has contradictory effects on the immune system. Glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients: it serves as a fuel source and as a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, alone and combined with glutamine supplementation, on the levels of cortisol and immunoglobulins in untrained young men. The literature shows that physical training can increase cytokines in the athlete's body; glutamine may counteract the negative effects of resistance training on immune function and on the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was applied for 4 weeks, with glutamine supplementation at 0.3 g/kg/day after practice. The resistance-training program consisted of nine exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl and triceps press down) performed four times per week. Participants performed 3 sets of 10 repetitions at 60-75% 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol and immunoglobulin (IgA, IgG, IgM) levels were evaluated pre- and post-test. Results: Results showed that four weeks of resistance training, with and without glutamine, caused a significant increase in body weight and BMI and a significant decrease (P < 0.001) in body fat percentage. VO2max also increased in both the exercise (P < 0.05) and exercise-with-glutamine (P < 0.001) groups; likewise, a significant reduction in IgG (P < 0.05) was observed in both groups. No significant difference was observed in the levels of cortisol, IgA or IgM in any of the groups. No significant change was observed in either parameter in the control group, and no significant difference was observed between the groups. Discussion: The alterations in the hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation cannot neutralize the damaging effects of intense exercise on the immune system.

Keywords: glutamine, resistance training, immunoglobulins, cortisol

Procedia PDF Downloads 451
462 Analysis of Determinants of Growth of Small and Medium Enterprises in Kwara State, Nigeria

Authors: Hussaini Tunde Subairu

Abstract:

The Small and Medium Enterprises (SMEs) sector serves as a catalyst for employment generation, national growth, poverty reduction and economic development in developing and developed countries. However, in Nigeria, despite a plethora of government policies and stimulus schemes directed at SMEs, the sector is still characterized by a high rate of failure and discontinuities. This study therefore investigated owner/manager profile, firm characteristics and external factors as possible determinants of SME growth, using selected SMEs in Kwara State. Primary data were sourced from 200 SME respondents registered with the National Association of Small and Medium Enterprises (NASMES) in the Kwara State Central Senatorial District. Multiple Regression Analysis (MRA) was used to analyze the relationship between the dependent and independent variables, and pairwise correlation was employed to examine the relationships among the independent variables. Analysis of Variance (ANOVA) was employed to assess the overall significance of the model. The findings revealed that the ANOVA put the value of the F-statistic at 420.45 with a significant p-value of 0.000. The values of R2 and adjusted R2, 0.9643 and 0.9620 respectively, suggested that 96 percent of the variation in employment growth was explained by the explanatory variables. The level of technical and managerial education has a t-value of 24.14 and p-value of 0.001, length of managers'/owners' experience in a similar trade a t-value of 21.37 and p-value of 0.001, age of managers/owners a t-value of 42.98 and p-value of 0.001, firm age a t-value of 25.91 and p-value of 0.001, number of firms in a cluster a t-value of 7.20 and p-value of 0.001, access to formal finance a t-value of 5.56 and p-value of 0.001, firm technology innovation a t-value of 25.32 and p-value of 0.01, institutional support a t-value of 18.89 and p-value of 0.01, globalization a t-value of 9.78 and p-value of 0.01, and infrastructure a t-value of 10.75 and p-value of 0.01. The results also indicated that initial size has a t-value of -1.71 and p-value of 0.090, which is consistent with Gibrat's Law. The study concluded that owner/manager profile, firm-specific characteristics and external factors substantially influenced the employment growth of SMEs in the study area. Policy should therefore enhance the human capital development of SME owners/managers and strengthen the fiscal policy thrust, including the tariff regime, to minimize the adverse effects of globalization. Governments at all levels must support SME growth by enhancing institutional support and by radically and significantly upgrading key infrastructure such as roads, rail, telecommunications, water and power.
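
A minimal sketch of the reported analysis chain: OLS multiple regression with the overall F-test (ANOVA), R², adjusted R² and per-coefficient t-values. The regressors and data are synthetic stand-ins for the survey variables.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for the study's regressors (e.g., education,
# experience, firm age, access to finance) and employment growth.
rng = np.random.default_rng(6)
n = 200
X = rng.normal(size=(n, 4))
beta = np.array([0.8, 0.5, 0.6, 0.3])
y = X @ beta + rng.normal(0, 0.5, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(f"F = {model.fvalue:.1f}, p = {model.f_pvalue:.3g}, "
      f"R^2 = {model.rsquared:.3f}, adj R^2 = {model.rsquared_adj:.3f}")
print(model.tvalues.round(2))      # t-value per coefficient (const first)
```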

Keywords: external factors, firm specific characteristics, owners / manager profile, small and medium enterprises

Procedia PDF Downloads 218
461 One Year Follow up of Head and Neck Paragangliomas: A Single Center Experience

Authors: Cecilia Moreira, Rita Paiva, Daniela Macedo, Leonor Ribeiro, Isabel Fernandes, Luis Costa

Abstract:

Background: Head and neck paragangliomas are a rare group of tumors with a large spectrum of clinical manifestations. The approach to evaluating and treating these lesions has evolved over recent years. Surgery was once the standard approach for these patients, but new imaging and radiation therapy techniques have changed that paradigm. Despite advances in treatment, the growth potential and clinical outcome of individual cases remain largely unpredictable. Objectives: To characterize our institutional experience with the clinical management of these tumors. Methods: This was a cross-sectional study of patients with paragangliomas of the head, neck and cranial base followed in our institution between 01 January and 31 December 2017. Data on tumor location, catecholamine levels, the specific imaging modalities employed in the diagnostic workup, treatment modality, tumor control and recurrence, complications of treatment, and hereditary status were collected and summarized. Results: A total of four female patients were followed between 01 January and 31 December 2017 in our institution. The mean age of our cohort was 53 (± 16.1) years. The primary locations were the jugulotympanic region (n=2, 50%) and the carotid body (n=2, 50%); only one of the carotid body tumors presented pulmonary metastasis at the time of diagnosis. None of the lesions was catecholamine-secreting. Two patients underwent genetic testing, with no mutations identified. The initial clinical presentation was variable, with decreased visual acuity and headache present in all patients. In one of the cases, loss of all teeth of the lower jaw was the presenting symptomatology. Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were employed as treatment approaches according to the anatomical location and resectability of the lesions. Post-therapeutic sequelae included persistent tinnitus and disabling pain; one patient presented glossopharyngeal neuralgia. Currently, all patients are under regular surveillance, with a median follow-up of 10 months. Conclusion: Ultimately, the clinical management of these tumors remains challenging owing to the heterogeneity of the clinical presentation, the existence of multiple treatment alternatives, and the potential of these tumors to seriously impair critical functions and, consequently, patients' quality of life.

Keywords: clinical outcomes, head and neck, management, paragangliomas

Procedia PDF Downloads 123
460 Reading Informational or Fictional Texts to Students: Choices and Perceptions of Preschool and Primary Grade Teachers

Authors: Anne-Marie Dionne

Abstract:

Teachers reading aloud to students is a well-established practice in preschool and primary classrooms, and many benefits of this pedagogical activity have been highlighted in multiple studies. However, it has also been shown that teachers are not keen on choosing informational texts for their read-alouds: their selections are mainly fictional stories, mostly written in a unique narrative story-like structure. Considering that students soon have to read complex informational texts by themselves as they move from one grade to another, there is cause for concern, because those who do not benefit from early exposure to informational texts may lack knowledge of the informational text structures that they will encounter regularly in their reading. Exposing students to informational texts can be done in different ways in classrooms. However, since reading aloud is such a common and efficient practice in preschool and primary grades, it is important to examine more deeply the factors teachers take into account when selecting texts for this important teaching activity. Moreover, it is critical to know why teachers are not inclined to choose informational texts more often when reading aloud to their pupils. A group of 22 preschool or primary grade teachers participated in this study. Data collection was done through a survey and individual semi-structured interviews. The survey was conducted to obtain quantitative data on the read-aloud practices of teachers. The interviews were organized around three categories of questions (exploratory, analytical, opinion) regarding the process of selecting texts for read-aloud sessions. A statistical analysis was conducted on the data obtained through the survey. The interviews were subjected to a content analysis aiming to classify the information collected into predetermined categories, such as the reasons given for favoring fictional texts over informational texts, the reasons given for avoiding informational texts in read-alouds, the perceived challenges that informational texts could bring when read aloud to students, and the perceived advantages they would present if they were chosen more often for this activity. Results show variable factors guiding teachers when they select the texts to be read aloud. For example, some choose solely fictional texts out of the conviction that these are more interesting for their students; they also perceive informational texts as poor choices because they are not suitable for pleasure reading. In that regard, the results point to some interesting elements. Many teachers perceive that reading fictional and informational texts aloud serves different goals: fictional texts are read for pleasure and informational texts mostly for academic purposes. These results bring out the urgency for teachers to become aware of the numerous benefits that reading each type of text aloud could bring to their students, especially informational texts. The possible consequences of teachers' perceptions will be discussed further in our presentation.

Keywords: fictional texts, informational texts, preschool or primary grade teachers, reading aloud

Procedia PDF Downloads 122
459 Heat Transfer Dependent Vortex Shedding of Thermo-Viscous Shear-Thinning Fluids

Authors: Markus Rütten, Olaf Wünsch

Abstract:

Non-Newtonian fluid properties can change the flow behaviour significantly, and its prediction becomes more difficult when thermal effects come into play. Hence, the focal point of this work is the wake flow behind a heated circular cylinder in the laminar vortex shedding regime for thermo-viscous shear-thinning fluids. For isothermal flows of Newtonian fluids, the vortex shedding regime is characterised by a distinct Reynolds number and an associated Strouhal number. For thermo-viscous shear-thinning fluids, the flow regime can change significantly depending on the temperature of the viscous wall of the cylinder. The Reynolds number alters locally and, consequently, the Strouhal number globally. In the present CFD study, the temperature dependence of the Reynolds and Strouhal numbers is investigated for the flow of a Carreau fluid around a heated cylinder. The temperature dependence of the fluid viscosity has been modelled by applying the standard Williams-Landel-Ferry (WLF) equation. In the present simulation campaign, thermal boundary conditions have been varied over a wide range in order to derive a relation between the dimensionless heat transfer, Reynolds and Strouhal numbers. Together with the shear thinning due to the high shear rates close to the cylinder wall, this leads to a significant decrease of viscosity of three orders of magnitude in the near field of the cylinder and a reduction of two orders of magnitude in the wake field. Yet the shear-thinning effect is able to change the flow topology: a complex Kármán vortex street occurs, also revealing distinct characteristic frequencies associated with the dominant and sub-dominant vortices. Heating the cylinder wall leads to delayed flow separation and a narrower wake flow, giving less space for the sequence of counter-rotating vortices. This spatial limitation not only reduces the amplitude of the oscillating wake flow, it also shifts the dominant frequency to higher frequencies; furthermore, it damps higher harmonics. Eventually, the locally heated wake flow smears out. Finally, the CFD simulation results of the systematically varied thermal flow parameter study have been used to derive a relation for the main characteristic order parameters.
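
For concreteness, the sketch below evaluates the material model named in the abstract: a Carreau shear-thinning viscosity combined with a WLF temperature shift factor. All material constants are illustrative, not the paper's fluid parameters.

```python
import numpy as np

# Carreau viscosity with WLF temperature shift (thermorheologically simple
# treatment: a_T scales both the time scale and the viscosity magnitude).
eta0, eta_inf = 10.0, 0.001            # zero/infinite-shear viscosities [Pa s]
lam, n = 1.0, 0.4                      # relaxation time [s], power-law index
C1, C2, T_ref = 17.44, 51.6, 293.15    # "universal" WLF constants, reference T [K]

def a_T(T):
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))  # WLF shift factor

def viscosity(shear_rate, T):
    aT = a_T(T)
    return aT * (eta_inf + (eta0 - eta_inf) *
                 (1.0 + (lam * aT * shear_rate) ** 2) ** ((n - 1.0) / 2.0))

for T in (293.15, 333.15):
    for g in (0.1, 10.0, 1000.0):
        print(f"T = {T:.0f} K, shear rate = {g:7.1f} 1/s -> "
              f"eta = {viscosity(g, T):.4f} Pa s")
```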

Keywords: heat transfer, thermo-viscous fluids, shear thinning, vortex shedding

Procedia PDF Downloads 280
458 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. The future discharge scenario is also studied using downscaled climate data. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, their poor spatial resolution constrains their application for impact studies at a regional or local level. The downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection, under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which included watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being applied in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the GAP optimization approach and subsequent calibration were used to obtain several parameter sets that finally reproduced the observed streamflow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001 - 2060 for both A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both A1B and A2 scenarios. These warming trends are higher in maximum than in minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon. Furthermore, summer discharge shows a decreasing trend in the Langtang Khola (Langtang region) but an increasing trend in the Modi Khola (Annapurna region) and Dudh Koshi (Khumbu region) river basins. The flow regime is more pronounced during later parts of the future decades than during earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
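
The sketch below shows the flavor of a conceptual HBV-type step: degree-day snowmelt feeding a single linear-reservoir storage. The parameters and forcing are invented for illustration and are far simpler than the calibrated HBV-light setup.

```python
import numpy as np

# Toy HBV-style bucket: degree-day melt + linear-reservoir discharge.
ddf = 5.0               # degree-day factor [mm/degC/day]
fc, k = 250.0, 0.05     # soil storage capacity [mm], recession coefficient [1/day]

rng = np.random.default_rng(7)
days = 365
temp = 5 + 12 * np.sin(2 * np.pi * (np.arange(days) - 100) / 365) \
       + rng.normal(0, 2, days)                  # daily temperature [degC]
precip = rng.gamma(0.6, 6.0, days)               # daily precipitation [mm]

snow, soil, q = 0.0, 100.0, np.zeros(days)
for d in range(days):
    if temp[d] <= 0.0:                           # accumulate snowpack
        snow += precip[d]; liquid = 0.0
    else:                                        # rain plus degree-day melt
        melt = min(snow, ddf * temp[d]); snow -= melt
        liquid = precip[d] + melt
    soil = min(fc, soil + liquid)                # fill soil store (overflow ignored)
    q[d] = k * soil                              # linear-reservoir discharge
    soil -= q[d]

print(f"annual discharge: {q.sum():.0f} mm, peak daily: {q.max():.1f} mm")
```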

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 318
457 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and how markedly different results can be obtained. Ultimately, sound engineering judgment and logic are often required to decipher the true meaning and significance (if any) of some PF results.
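
As a concrete illustration of a sampling-based PF calculation, the sketch below draws shear-strength parameters from assumed distributions, evaluates a simple planar (infinite-slope style) factor of safety, and counts the fraction of realizations with FS < 1. The geometry and statistics are illustrative, not the case-study values.

```python
import numpy as np

# Monte-Carlo probability of failure for a planar sliding mechanism:
# FS = (c + gamma*H*cos(b)^2*tan(phi)) / (gamma*H*sin(b)*cos(b)), dry slope.
rng = np.random.default_rng(8)
n = 100_000
phi = np.radians(rng.normal(35, 3, n))        # friction angle [deg -> rad]
c = rng.lognormal(np.log(20), 0.3, n)         # cohesion [kPa]
gamma, H, b = 26.0, 15.0, np.radians(35)      # unit weight, height, slope angle

fs = (c + gamma * H * np.cos(b) ** 2 * np.tan(phi)) \
     / (gamma * H * np.sin(b) * np.cos(b))
pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, probability of failure = {pf:.2%}")
```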

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 190
456 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta as drastically as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s (the Local Group at about 600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = aT⁴/3, where Eγ is the energy density of the photon gas and a is the radiation constant. The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
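
A minimal sketch of the model's two fitted effects, under assumed (not the paper's) parameter values: exponential energy dissipation along the path gives 1 + z = exp(αd), and scattering attenuation adds a magnitude term on top of the geometric inverse-square dimming.

```python
import numpy as np

# Two effects along a path of length d [Mpc]: dissipation redshift
# 1 + z = exp(alpha*d), and scattering attenuation exp(-beta*d) folded
# into the distance modulus. Both coefficients are assumed values.
alpha = 2.3e-4   # energy-loss rate per Mpc (of order H0/c), illustrative
beta = 2.0e-5    # scattering attenuation per Mpc, illustrative

def distance_modulus(d_mpc):
    # 5*log10(d/10pc) geometric dimming + 2.5*log10(e^(beta*d)) scattering
    return 5 * np.log10(d_mpc * 1e6 / 10) + 2.5 * beta * d_mpc * np.log10(np.e)

for d in (100.0, 1000.0, 5000.0):
    z = np.expm1(alpha * d)          # dissipation redshift
    print(f"d = {d:6.0f} Mpc -> z = {z:.3f}, mu = {distance_modulus(d):.2f}")
```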

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 104
455 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to bring microscale heat transfer effects into macro geometries while overcoming cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and the Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least a comparable increment in both heat transfer coefficient and pressure drop, if not a higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region. More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to the heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study for achieving a bigger stride in energy efficiency at much lower cost.
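
The closing point about bifurcation can be illustrated with laminar Hagen-Poiseuille estimates: a straight channel versus a Murray-law bifurcation (cubes of diameters conserved) carrying the same total flow. The dimensions and flow rate are assumed and kept in the laminar range; they are not the insert geometry.

```python
import numpy as np

# One straight channel vs a Murray-law bifurcation (d0^3 = 2*d1^3)
# carrying the same total flow, compared by laminar pressure drop and
# wetted (heat transfer) area. All dimensions are illustrative.
mu = 1.0e-3                          # water viscosity [Pa s]
Q = 0.1 / 60000.0                    # 0.1 L/min in m^3/s (laminar, Re ~ 1000)
d0, L0 = 2.0e-3, 0.10                # single channel: 2 mm bore, 100 mm long

def dp(q, d, length):
    return 128.0 * mu * length * q / (np.pi * d ** 4)   # Hagen-Poiseuille

single = dp(Q, d0, L0)
d1 = d0 / 2.0 ** (1.0 / 3.0)         # Murray's law child diameter
branched = dp(Q, d0, L0 / 2) + dp(Q / 2, d1, L0 / 2)    # parent + one child path

area_single = np.pi * d0 * L0
area_branched = np.pi * (d0 * L0 / 2 + 2 * d1 * L0 / 2)

print(f"pressure drop: single {single:.0f} Pa, bifurcated {branched:.0f} Pa")
print(f"wetted area:   single {area_single * 1e4:.2f} cm^2, "
      f"bifurcated {area_branched * 1e4:.2f} cm^2")
```

Under these assumed numbers, the bifurcated layout pays only a modest pressure-drop premium while gaining roughly 30 percent more wetted area for heat transfer, which is the trade-off the abstract describes.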

Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop

Procedia PDF Downloads 307
454 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading

Authors: Binger Lu

Abstract:

It remains unclear whether second language (L2) speakers can use discourse context cues to predict upcoming information as native speakers do during online comprehension. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies have shown that L1 prediction is sensitive to the global validity of predictive cues. The current study explores whether and to what extent L2 learners can dynamically and strategically adjust their prediction according to the global validity of predictive cues in L2 discourse comprehension, as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictable (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictable (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) and unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions, the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R. C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases, respectively), whereas English native speakers did not predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect (χ²(3) = .256, p = .968) nor an interaction (Predictability:Background:Validity, χ²(3) = 1.229, p = .746; Predictability:Validity, χ²(3) = 2.520, p = .472; Background:Validity, χ²(3) = 1.281, p = .734) of Validity with the speaker groups. The results suggest that prediction occurs in L2 discourse processing but to a much lesser extent in L1, with a significant effect in some conditions of L1 Chinese and a null effect in L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues than L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of predictive cues. C-E bilinguals' predictive processing could be partly transferred from their L1, as prior research showed that discourse information plays a more significant role in L1 Chinese processing.
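
The sketch below mirrors the analysis type (linear mixed-effects on reading times; the study used R, this uses Python's statsmodels) on simulated subject/item data. The effect sizes and structure are placeholders, not the experiment's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated self-paced reading data: per-subject random intercepts and a
# fixed predictability effect on reading times at the critical region.
rng = np.random.default_rng(3)
subjects, items = 32, 24
rows = []
for s in range(subjects):
    s_int = rng.normal(0, 30)                 # by-subject intercept [ms]
    for i in range(items):
        pred = int(rng.integers(0, 2))        # 0 = low-, 1 = high-predictable
        rt = 380 + s_int - 35 * pred + rng.normal(0, 50)
        rows.append({"subject": s, "item": i, "pred": pred, "rt": rt})
df = pd.DataFrame(rows)

# Mixed model with random intercepts grouped by subject.
m = smf.mixedlm("rt ~ pred", df, groups=df["subject"]).fit()
print(m.summary())                            # fixed effect for predictability
```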

Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading

Procedia PDF Downloads 112
453 Ionic Liquids-Polymer Nanoparticle Systems as Breakthrough Tools to Improve the Leprosy Treatment

Authors: A. Julio, R. Caparica, S. Costa Lima, S. Reis, J. G. Costa, P. Fonte, T. Santos De Almeida

Abstract:

The Mycobacterium leprae causes a chronic and infectious disease called leprosy, whose most common symptoms are peripheral neuropathy and deformation of several parts of the body. The pharmacological treatment of leprosy is a combined therapy with three different drugs: rifampicin, clofazimine, and dapsone. However, clofazimine and dapsone have poor solubility in water and also low bioavailability, so it is crucial to develop strategies to overcome these drawbacks. The use of ionic liquids (ILs) may be one such strategy, since they have been used as solubility promoters. ILs are salts, liquid below 100 ºC or even at room temperature, that may be placed in water, oils or hydroalcoholic solutions. Another approach may be the encapsulation of the drugs into polymeric nanoparticles, which improves their bioavailability. In this study, two different classes of ILs were used as solubility enhancers of the poorly soluble antileprotic drugs: imidazole-based and choline-based ionic liquids. After the solubility studies, IL-PLGA hybrid nanoparticle systems were developed to deliver such drugs. First, the solubility studies of clofazimine and dapsone were performed in water and in water:IL mixtures, at IL concentrations at which cell viability is maintained, at room temperature for 72 hours. For both drugs, an improvement in drug solubility was observed, and [Cho][Phe] proved to be the best solubility enhancer, especially for clofazimine, for which a 10-fold improvement was observed. Next, nanoparticles with a polymeric matrix of poly(lactic-co-glycolic acid) (PLGA) 75:25 were produced by a modified solvent-evaporation W/O/W double emulsion technique in the presence of [Cho][Phe]. The inner phase was an aqueous solution of 0.2 % (v/v) of the above IL with each drug at its maximum solubility determined in the previous study. After production, the hybrid nanosystem was physicochemically characterized. The produced nanoparticles had diameters of around 580 nm and 640 nm for clofazimine and dapsone, respectively. The polydispersity index was in agreement with the value recommended for drug delivery systems (around 0.3). The developed hybrid nanosystems demonstrated promising association efficiency (AE) values for both drugs, given their low solubility (64.0 ± 4.0 % for clofazimine and 58.6 ± 10.0 % for dapsone), which points to the capacity of these delivery systems to enhance the bioavailability and loading of clofazimine and dapsone. Overall, these achievements may improve patients' quality of life, since they may allow a change in the therapeutic scheme, with lower drug doses required to obtain a therapeutic effect. The authors would like to thank Fundação para a Ciência e a Tecnologia, Portugal (FCT/MCTES (PIDDAC), UID/DTP/04567/2016-CBIOS/PRUID/BI2/2018).

Keywords: ionic liquids, ionic liquids-PLGA nanoparticles hybrid systems, leprosy treatment, solubility

Procedia PDF Downloads 122
452 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the exerted solicitations. The acoustic emission (AE) technique is a well-established tool for discriminating between damage mechanisms: suitable sensors are used during the mechanical test to monitor the structural health of the material, relevant AE features are extracted from the recorded signals, and the data are analyzed using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, digital image correlation, acoustic emission and micro-tomography. In this study, a multi-variable AE data analysis approach was developed to discriminate between the signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database, which was then used to construct a supervised classifier for the automatic recognition of AE signals. Several materials with different ingredients were tested under various solicitations in order to feed and enrich the learning database. The methodology presented in this work was useful to refine the damage threshold for the new-generation materials, and the damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of solicitation, the identified classes are reproducible and little disturbed. The supervised classifier constructed on the basis of the learning database was able to predict the labels of the classified signals.
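
As an illustration of the unsupervised step, the sketch below clusters synthetic AE feature vectors with k-means and scores candidate cluster counts with the silhouette coefficient; the specific algorithm and features are assumptions, since the abstract does not name the exact technique used.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic AE features (e.g., amplitude, duration, counts, peak
# frequency) drawn from three artificial "mechanism" populations.
rng = np.random.default_rng(4)
feats = np.vstack([rng.normal(m, 0.5, size=(200, 4)) for m in (0.0, 2.0, 4.0)])
X = StandardScaler().fit_transform(feats)

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k = {k}: silhouette = {silhouette_score(X, labels):.3f}")
# the retained clusters can then label a learning database for a
# supervised classifier, as described in the abstract
```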

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 134
451 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method (DEM) is a promising approach to modeling the microscopic behavior of granular materials. The quality of the simulations, however, depends on the model parameters used. The present study focuses on the calibration and validation of discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effects of the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio were examined. The parameters were calibrated such that the simulations reproduce macro-mechanical characteristics like dilation angle, peak stress, and stiffness. The calibrated parameters were then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters were applied to forecast the micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, non-coaxiality, and sample inhomogeneity, during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. A vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers. Buckling of columns is not observed, owing to the small rolling resistance coefficient adopted in the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behavior is well described using the calibrated and validated material parameters.
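
The calibration idea can be sketched as a search over contact-model parameters that minimizes the misfit to the triaxial targets. In the sketch below, `run_triaxial` is a hypothetical stand-in for launching a DEM simulation; the target values and parameter grids are placeholders, not the study's numbers.

```python
# Calibration-loop sketch: pick the parameter set whose simulated macro
# responses best match the triaxial targets. Everything here is illustrative.
import itertools

targets = {"peak_stress_kPa": 420.0, "dilation_angle_deg": 12.0}

def run_triaxial(friction, rolling_resistance, effective_modulus):
    # Placeholder: a real implementation would launch a DEM run here.
    return {"peak_stress_kPa": 400 + 50 * friction + 30 * rolling_resistance,
            "dilation_angle_deg": 8 + 10 * rolling_resistance}

def misfit(result):
    return sum(abs(result[k] - v) / v for k, v in targets.items())

grid = itertools.product([0.3, 0.5, 0.7],   # inter-particle friction
                         [0.1, 0.2, 0.4],   # rolling resistance coefficient
                         [1e8, 2e8])        # effective modulus [Pa]
best = min(grid, key=lambda p: misfit(run_triaxial(*p)))
print("calibrated parameters:", best)
```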

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 98
450 Role of Grey Scale Ultrasound Including Elastography in Grading the Severity of Carpal Tunnel Syndrome - A Comparative Cross-sectional Study

Authors: Arjun Prakash, Vinutha H., Karthik N.

Abstract:

BACKGROUND: Carpal tunnel syndrome (CTS) is a common entrapment neuropathy with an estimated prevalence of 0.6-5.8% in the general adult population. It is caused by compression of the Median Nerve (MN) at the wrist as it passes through a narrow osteofibrous canal. Presently, the diagnosis is established by clinical symptoms and physical examination, and Nerve Conduction Study (NCS) is used to assess its severity. However, NCS is considered painful, time-consuming and expensive, with a false-negative rate between 16 and 34%. Ultrasonography (USG) is now increasingly used as a diagnostic tool in CTS due to its non-invasive nature, increased accessibility and relatively low cost. Elastography is a newer USG modality that helps to assess the stiffness of tissues; however, there is limited literature on its applications in peripheral nerves. OBJECTIVES: Our objectives were to measure the Cross-Sectional Area (CSA) and elasticity of the MN at the carpal tunnel using grey-scale Ultrasonography (USG), Strain Elastography (SE) and Shear Wave Elastography (SWE). We also made an attempt to independently evaluate the role of grey-scale USG, SE and SWE in grading the severity of CTS, keeping NCS as the gold standard. MATERIALS AND METHODS: After approval from the Institutional Ethics Review Board, we conducted a comparative cross-sectional study over a period of 18 months. The participants were divided into two groups: Group A consisted of 54 patients with clinically diagnosed CTS who underwent NCS, and Group B consisted of 50 controls without any clinical symptoms of CTS. All ultrasound examinations were performed on a SAMSUNG RS 80 EVO ultrasound machine with a 2-9 MHz linear probe. In both groups, the CSA of the MN was measured on grey-scale USG, and its elasticity was measured at the carpal tunnel (in terms of strain ratio and shear modulus). The variables were compared between both groups using the independent t-test, and subgroup analyses were performed using one-way analysis of variance. Receiver operating characteristic curves were used to evaluate the diagnostic performance of each variable. RESULTS: The mean CSA of the MN was 13.60 ± 3.201 mm² and 9.17 ± 1.665 mm² in Group A and Group B, respectively (p < 0.001). The mean SWE was 30.65 ± 12.996 kPa and 17.33 ± 2.919 kPa in Group A and Group B, respectively (p < 0.001), and the mean strain ratio was 7.545 ± 2.017 and 5.802 ± 1.153 in Group A and Group B, respectively (p < 0.001). CONCLUSION: The combined use of grey-scale USG, SE and SWE is extremely useful in grading the severity of CTS and can be used as a painless and cost-effective alternative to NCS. Early diagnosis and grading of CTS, together with effective treatment, are essential to avoid permanent nerve damage and functional disability.
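
A short sketch of this statistical workflow is given below, using synthetic data drawn from the reported group means and standard deviations rather than the study data; the t-test and ROC analysis mirror the methods described above.

```python
# Independent t-test between groups and ROC analysis for one variable
# (median-nerve CSA). Data are simulated from the reported summary statistics
# (13.60 +/- 3.201 vs 9.17 +/- 1.665 mm^2), not taken from the study.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
csa_cts = rng.normal(13.60, 3.201, 54)    # Group A: CTS patients
csa_ctrl = rng.normal(9.17, 1.665, 50)    # Group B: controls

t, p = stats.ttest_ind(csa_cts, csa_ctrl)
print(f"t = {t:.2f}, p = {p:.2e}")

# Diagnostic performance of CSA as a classifier (1 = CTS, 0 = control).
y_true = np.r_[np.ones(54), np.zeros(50)]
y_score = np.r_[csa_cts, csa_ctrl]
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```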

Keywords: carpal tunnel, ultrasound, elastography, nerve conduction study

Procedia PDF Downloads 63
449 Comparative Studies and Optimization of Biodiesel Production from Oils of Selected Seeds of Nigerian Origin

Authors: Ndana Mohammed, Abdullahi Musa Sabo

Abstract:

The oils used in this work were extracted from seeds of Ricinus communis, Hevea brasiliensis, Gossypium hirsutum, Azadirachta indica, Glycine max and Jatropha curcas by solvent extraction with n-hexane, giving yields of 48.00±0.00%, 44.30±0.52%, 45.50±0.64%, 47.60±0.51%, 41.50±0.32% and 46.50±0.71%, respectively. These feedstocks are, however, challenging for the transesterification reaction because they were found to contain high amounts of free fatty acids (FFA) (6.37±0.18, 17.20±0.00, 6.14±0.05, 8.60±0.14, 5.35±0.07 and 4.24±0.02 mg KOH/g, in the order above). As a result, a two-stage transesterification process was used to produce biodiesel: acid esterification was used to reduce the high FFA content to 1% or less, and the second stage involved alkaline transesterification together with optimization of the process conditions to obtain a high yield of quality biodiesel. The salient features of this study include the characterization of the oils using AOAC and AOCS standard methods to reveal properties that may determine the viability of the sample seeds as potential feedstocks for biodiesel production, such as acid value, saponification value, peroxide value, iodine value, specific gravity, kinematic viscosity, and free fatty acid profile. The optimization of the process parameters in biodiesel production was investigated. Different concentrations of alkaline catalyst (KOH) (0.25, 0.5, 0.75, 1.0 and 1.50% w/v), methanol/oil molar ratios (3:1, 6:1, 9:1, 12:1 and 15:1), reaction temperatures (50 °C, 55 °C, 60 °C, 65 °C, 70 °C), and stirring rates (150 rpm, 225 rpm, 300 rpm and 375 rpm) were used to determine the optimal conditions at which the maximum yield of biodiesel would be obtained; while one parameter was being optimized, the others were kept fixed. The results show the optimal biodiesel yield at a catalyst concentration of 1% and a methanol/oil molar ratio of 6:1, except for the oil from Ricinus communis, for which the optimum was obtained at 9:1. A reaction temperature of 65 °C was observed for all samples, and similarly a stirring rate of 300 rpm was observed for all samples except the oil from Ricinus communis, for which 375 rpm was optimal. The properties of the biodiesel fuel were evaluated, and the results conformed favorably to the ASTM and EN standard specifications for fossil diesel and biodiesel; the biodiesel produced can therefore be used as a substitute for fossil diesel. The work also reports the results of a study on the effect of biodiesel storage on its physicochemical properties, to ascertain the level of deterioration with time. The values obtained for all samples fall completely outside the standard specifications for biodiesel before the end of the twelve-month test period and are clearly degraded. This suggests that biodiesels from the oils of Ricinus communis, Hevea brasiliensis, Gossypium hirsutum, Azadirachta indica, Glycine max and Jatropha curcas cannot be stored beyond twelve months.
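
The optimization scheme described above, varying one parameter while keeping the others fixed, is a one-factor-at-a-time (OFAT) search. The sketch below mirrors it with the tested levels from the abstract; `measure_yield` is a hypothetical stand-in for running a transesterification batch and weighing the product.

```python
# One-factor-at-a-time (OFAT) optimization sketch over the tested levels.
# measure_yield is a placeholder for a laboratory experiment; its form is
# invented so that the optimum lands at the conditions reported above.
def measure_yield(catalyst_pct, molar_ratio, temp_c, stirring_rpm):
    return (-abs(catalyst_pct - 1.0) - abs(molar_ratio - 6)
            - abs(temp_c - 65) / 10 - abs(stirring_rpm - 300) / 100)

levels = {
    "catalyst_pct": [0.25, 0.5, 0.75, 1.0, 1.5],
    "molar_ratio": [3, 6, 9, 12, 15],
    "temp_c": [50, 55, 60, 65, 70],
    "stirring_rpm": [150, 225, 300, 375],
}
current = {"catalyst_pct": 1.0, "molar_ratio": 6, "temp_c": 60, "stirring_rpm": 300}

for name, values in levels.items():   # vary one factor, keep the rest fixed
    best = max(values, key=lambda v: measure_yield(**{**current, name: v}))
    current[name] = best
print("optimal conditions:", current)
```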

Keywords: biodiesel, characterization, esterification, optimization, transesterification

Procedia PDF Downloads 395
448 Coping Strategies among Caregivers of Children with Autism Spectrum Disorders: A Cluster Analysis

Authors: Noor Ismael, Lisa Mische Lawson, Lauren Little, Murad Moqbel

Abstract:

Background/Significance: Caregivers of children with Autism Spectrum Disorders (ASD) develop coping mechanisms to overcome daily challenges and successfully parent their child. Coping strategies vary widely among caregivers of children with ASD, and capturing homogeneity within such a variable group may help elucidate targeted intervention approaches. Study Purpose: This study aimed to identify groups of caregivers of children with ASD based on their coping mechanisms, and to examine whether these groups differ in strain level. Methods: This study was a secondary data analysis of survey responses from 273 caregivers of children with ASD. Measures consisted of the COPE Inventory and the Caregiver Strain Questionnaire. Data analyses consisted of a cluster analysis to group caregiver coping strategies, and analysis of variance to compare the caregiver coping groups on strain level. Results: The cluster analysis showed four distinct groups with different combinations of coping strategies: Social-Supported/Planning (group one), Spontaneous/Reactive (group two), Self-Supporting/Reappraisal (group three), and Religious/Expressive (group four). Caregivers in group one (Social-Supported/Planning) demonstrated significantly higher levels of planning, use of instrumental social support, and use of emotional social support relative to the other three groups. Caregivers in group two (Spontaneous/Reactive) used less restraint and less suppression of competing activities relative to the other three groups, and also showed significantly lower levels of religious coping. In contrast to group one, caregivers in group three (Self-Supporting/Reappraisal) demonstrated significantly lower levels of use of instrumental and emotional social support relative to the other three groups, while showing more acceptance and more positive reinterpretation and growth. Caregivers in group four (Religious/Expressive) demonstrated significantly higher levels of religious coping relative to the other three groups and utilized more venting of emotions. The analysis of variance showed no significant differences between the four groups on strain scores. Conclusions: There are four distinct groups with different combinations of coping strategies: Social-Supported/Planning, Spontaneous/Reactive, Self-Supporting/Reappraisal, and Religious/Expressive. Each caregiver group engaged in a combination of coping strategies to overcome the strain of caregiving.
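
A minimal sketch of this analysis pipeline follows. The abstract does not specify the clustering algorithm, so k-means is used here as a stand-in; the four-cluster solution and the follow-up one-way ANOVA on strain mirror the reported design, and the data are synthetic placeholders.

```python
# Cluster analysis of COPE subscale scores followed by a one-way ANOVA
# comparing strain across the resulting groups. All data are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
cope_scores = rng.normal(size=(273, 15))   # 273 caregivers x COPE subscales
strain = rng.normal(size=273)              # Caregiver Strain Questionnaire

groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(cope_scores))

f, p = f_oneway(*(strain[groups == g] for g in range(4)))
print(f"ANOVA on strain: F = {f:.2f}, p = {p:.3f}")   # expected: no difference
```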

Keywords: autism, caregivers, cluster analysis, coping strategies

Procedia PDF Downloads 254
447 Expression of CASK Antibody in Non-Mucinous Colorectal Adenocarcinoma and Its Relation to Clinicopathological Prognostic Factors

Authors: Reham H. Soliman, Noha Noufal, Howayda AbdelAal

Abstract:

Calcium/calmodulin-dependent serine protein kinase (CASK) belongs to the membrane-associated guanylate kinase (MAGUK) family and has been proposed as a mediator of cell-cell adhesion and proliferation, which can contribute to tumorigenesis. CASK has been reported as a good prognostic factor in some tumor subtypes, while being considered a poor prognostic marker in others. To our knowledge, there is insufficient evidence regarding the role of CASK in colorectal cancer. The aim of this study is to evaluate the expression of CASK in non-mucinous colorectal adenocarcinoma and in adenomatous polyps as precursor lesions, and to assess its prognostic significance. The study included 42 cases of conventional colorectal adenocarcinoma and 15 biopsies of adenomatous polyps with variable degrees of dysplasia. They were reviewed for clinicopathological prognostic factors and stained with a mouse monoclonal CASK antibody using heat-induced antigen retrieval immunohistochemical techniques. The results showed that CASK protein was significantly overexpressed (p < 0.05) in CRC compared with adenoma samples. CASK was overexpressed in the majority of CRC samples, with 85.7% of cases showing moderate to strong expression, while 46.7% of adenomas were positive. CASK overexpression was significantly correlated with both TNM stage and grade of differentiation (p < 0.05). Expression was significantly higher in tumor samples of early stage (I/II) than of advanced stage (III/IV), and in low-grade (59.5%) rather than high-grade (40.5%) tumors. Another interesting finding was observed among the adenomas, where stronger staining intensity was seen in samples with high-grade dysplasia (33.3%) than in those of lower grades (13.3%). In conclusion, this study shows a significant overexpression of CASK protein in CRC as well as in adenomas with high-grade dysplasia, indicating that CASK is involved in the process of carcinogenesis and may act as a trigger of the adenoma-carcinoma cascade. CASK was significantly overexpressed in early-stage and low-grade tumors rather than in tumors of advanced stage and higher histological grade, suggesting that CASK is a good prognostic factor. We propose that CASK affects CRC in two different ways derived from its physiology: as part of the MAGUK family it can stimulate proliferation, while through its cell-membrane localization and its role as a mediator of cell-cell adhesion it might contribute to tumor confinement and localization.

Keywords: CASK, colorectal cancer, overexpression, prognosis

Procedia PDF Downloads 257
446 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that reported methods rely on solid-phase extraction or liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality-control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions, and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating drug concentration in rat plasma. The method described in this article uses a simple protein-precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast, high-throughput sample analysis at low cost for analyzing aminophylline in biological samples. No interfering peaks were observed at the elution times of aminophylline and the internal standard, and the method had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
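
The linearity check reported above amounts to regressing the detector response against spiked concentrations over 0.5-40.0 µg/mL and verifying the coefficient of determination. The sketch below illustrates this with invented calibration responses; only the concentration range and the r² acceptance value come from the abstract.

```python
# Calibration-curve linearity sketch: peak-area ratios (analyte / internal
# standard) vs spiked concentration. Responses below are illustrative.
import numpy as np
from scipy.stats import linregress

conc = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 40.0])           # ug/mL
response = np.array([0.041, 0.083, 0.40, 0.81, 1.63, 3.22])  # peak-area ratio

fit = linregress(conc, response)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}")
print(f"r^2 = {fit.rvalue**2:.4f}")   # the study reports r^2 > 0.9967

def back_calculate(ratio: float) -> float:
    """Concentration of an unknown sample from its peak-area ratio."""
    return (ratio - fit.intercept) / fit.slope
```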

Keywords: Aminophyllin, preclinical pharmacokinetics, rat plasma, RPHPLC

Procedia PDF Downloads 195
445 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh

Abstract:

The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, using membrane technology and protein modification, a modified cheese was developed and its properties were compared with a control sample. A combination of ultrafiltration, nanofiltration and reverse osmosis technologies was employed to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals. For protein modification, a two-stage chemical and enzymatic reaction was employed before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were then compared with the control. Results showed that the modified protein enhanced the functional properties of the final cheese significantly (p < 0.05), even though its protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the textural analysis. A non-linear viscoelastic study showed a triangular waveform stress for the cheese containing the modified protein, while the control sample showed a rectangular waveform stress, suggesting better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were determined over 120 days. It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control was 4.5%. The control sample showed a shelf life of 8 weeks, while the shelf life of the modified cheese was 18 weeks in the refrigerator. Over 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations over time followed the semicircular shape model (R² = 0.92, R²adj = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rates in the modified cheese were lower than those of the control; this result could be explained by the shortage of an energy source for the microorganisms in the modified cheese. The lactose content of the modified sample was less than 0.2 ± 0.05% at the end of fermentation, while it was 3.7 ± 0.68% in the control sample.
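
The model-fitting step above can be sketched as a nonlinear least-squares fit reporting the same goodness-of-fit statistics (R², adjusted R², RMSE). The abstract does not give the functional form of the "semicircular shape model", so the semicircle-like function below is an assumption, and the count data are placeholders.

```python
# Hypothetical fit of a semicircle-shaped growth model to mold/yeast counts
# over storage time; the model form and all data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def semicircular(t, a, t0, r):
    return a * np.sqrt(np.clip(1.0 - ((t - t0) / r) ** 2, 0.0, None))

weeks = np.arange(0, 19)
counts = semicircular(weeks, 6.0, 12.0, 14.0) \
         + np.random.default_rng(3).normal(0, 0.3, 19)

popt, _ = curve_fit(semicircular, weeks, counts, p0=[5.0, 10.0, 12.0])
pred = semicircular(weeks, *popt)
ss_res = np.sum((counts - pred) ** 2)
ss_tot = np.sum((counts - counts.mean()) ** 2)
n, k = len(weeks), len(popt)
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
rmse = np.sqrt(ss_res / n)
print(f"R^2 = {r2:.2f}, adj. R^2 = {r2_adj:.2f}, RMSE = {rmse:.2f}")
```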

Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese

Procedia PDF Downloads 53
444 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using partial M-sequence (PMS) codes is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using PMS codes can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network and enhance the overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting the transmitted optical signals to the input port of each optical receiver; the parameter N of the PMS code is the code length. The proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1 × 2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, several assumptions were made: first, the unpolarized SLD has a flat power spectral density (PSD); second, the received optical power at the input port of each optical receiver is the same; third, all photodiodes in the proposed network have the same electrical properties; fourth, transmitting '1' and '0' are equally probable. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was computed and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% relative to networks using other codes at a BER of 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical networks.
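
The building block behind these codes, an M-sequence generated by a linear-feedback shift register (LFSR) and the fixed in-phase cross-correlation between its cyclic shifts, can be sketched as below. The partial/truncation step that turns an M-sequence into the proposed PMS code is specific to this paper and is not reproduced here.

```python
# M-sequence generation via a Fibonacci LFSR and in-phase cross-correlation
# (IPCC) between cyclic shifts used as unipolar spectral codewords.
def m_sequence(taps, n_bits):
    """Generate one period (2**n_bits - 1 chips) of a binary M-sequence."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:            # XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

seq = m_sequence(taps=(4, 3), n_bits=4)   # x^4 + x^3 + 1, period N = 15
codewords = [seq[i:] + seq[:i] for i in range(len(seq))]  # cyclic shifts

def ipcc(a, b):
    """In-phase cross-correlation of two unipolar codewords."""
    return sum(x & y for x, y in zip(a, b))

print(ipcc(codewords[0], codewords[0]))   # code weight: 8
print(ipcc(codewords[0], codewords[3]))   # fixed IPCC between shifts: 4
```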

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 354
443 Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay

Authors: Sharifullah Ahmed, Sarwar Jahan Md. Yasin

Abstract:

Shallow foundations on unimproved soft natural soils can undergo high consolidation and secondary settlement. For low- and medium-rise building projects on such soil conditions, pile foundations may not be cost-effective. In such cases, an alternative to pile foundations may be shallow strip footings placed on a double-layered improved soil system, in which the upper layer is untreated or cement-treated compacted sand and the underlying layer is natural soft clay. This system reduces the settlement to an allowable limit. The current research deals with the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay. The settlement of this shallow foundation has been studied for sand layer thicknesses of 0.3 to 0.9 times the footing width. The response of the clay layer is assumed undrained during plastic loading stages and drained during consolidation stages; the response of the sand layer is drained during all loading stages. FEM analysis was performed using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width was modeled using the Hardening Soil, Soft Soil and Soft Soil Creep models, and the upper improvement layer was modeled using only the Hardening Soil model. The groundwater level is at the top of the clay deposit, making the system fully saturated. A parametric study was conducted to determine the effect of the thickness, density and cementation of the sand mat, and of the density and shear strength of the soft clay layer, on the settlement of the strip foundation under uniformly distributed vertical loads of varying magnitude. A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit, by obtaining the sand mat thickness required for given subsoil parameters to ensure no punching shear failure and no settlement beyond the allowable level. A design guideline in the form of non-dimensional charts has been developed for footing pressures equivalent to medium-rise residential or commercial buildings with strip footings on soft inorganic Normally Consolidated (NC) soil of Bangladesh with void ratios from 1.0 to 1.45.

Keywords: design charts, ground improvement, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay

Procedia PDF Downloads 102
442 Quality of Life of the Elderly and Associated Factors in Bharatpur Metropolitan City, Chitwan: A Mixed-Method Study

Authors: Rubisha Adhikari, Rajani Shah

Abstract:

Introduction: Aging is a natural, global and inevitable phenomenon that every person goes through, and nobody can escape the process. As life expectancy continues to increase, one of the emerging challenges for public health is to improve the quality of the later years of life. Quality of life (QoL) has become a key goal for many public health initiatives. Population aging has become a global phenomenon, with older populations growing more quickly in developing nations than in industrialized ones, leaving minimal opportunity to regulate the consequences of the demographic shift. Methods: A community-based descriptive analytical approach was used to examine the quality of life and associated factors among elderly people. A mixed method was chosen for the study. For quantitative data collection, a household survey was conducted using the WHOQOL-OLD tool. For qualitative data collection, in-depth interviews were conducted with twenty participants; each lasted about an hour and was audio recorded, and the data generated were transcribed verbatim. The in-depth interview guide was developed by the research team and pilot-tested before the actual interviews. Results: This study showed associations between quality of life and socio-demographic variables. Among the socio-demographic variables, age (χ² = 14.445, p = 0.001), gender (χ² = 14.323, p < 0.001), marital status (χ² = 10.816, p = 0.001), education status (χ² = 23.948, p < 0.001), household income (χ² = 13.493, p = 0.001), personal income (χ² = 14.129, p = 0.001), source of personal income (χ² = 28.332, p < 0.001), social security allowance (χ² = 18.005, p < 0.001) and alcohol consumption (χ² = 9.397, p = 0.002) are significantly associated with the quality of life of the elderly. In addition, affordability (χ² = 12.088, p = 0.001), physical activity (χ² = 9.314, p = 0.002), emotional support (χ² = 9.122, p = 0.003), and economic support (χ² = 8.104, p = 0.004) are associated with the quality of life of elderly people. Conclusion: This mixed-method study provides insight into the attributes of the quality of life of elderly people in Nepal and similar settings. As the geriatric population grows in full swing, maintaining a high quality of life has become a major challenge. This study showed that determinants such as age, gender, marital status, education status, household income, personal income, source of personal income, social security allowance, alcohol consumption, economic support, emotional support, affordability and physical activity are associated with the quality of life of the elderly.
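
The association tests reported above are chi-squared tests on contingency tables of each factor against the QoL category. The sketch below illustrates the method for one factor; the cell counts are invented placeholders, not the survey data.

```python
# Chi-squared test of association between a socio-demographic factor and
# quality-of-life category. The 2x2 counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

#                 good QoL   poor QoL
table = np.array([[70,       40],      # male
                  [45,       75]])     # female

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi^2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```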

Keywords: ageing, Chitwan, elderly, health status, quality of life

Procedia PDF Downloads 31
441 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or injure the crew. Penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Penetration simulation using ANSYS can therefore be another option for calculating penetration depth. However, modeling the targets and selecting the input parameters correctly is essential for obtaining an accurate penetration depth. This paper performs a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved when adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data from papers on the penetration equation. To minimize calculation time, the parameter values obtained from the accuracy analysis were adjusted to optimize overall performance. The analysis produced the following findings: 1) as the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase; 2) as the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease; 3) as the yield stress of the target material decreases, the penetration depth increases; 4) the boundary condition with only the side surface of the target fixed gives more penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulations and experiments. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be carried out on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to anticipate penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
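
The accuracy metric used above, the RMS error between simulated and experimental penetration depths for each candidate parameter setting, can be sketched as follows. The depth values are placeholders; a real study would read them from ANSYS result files.

```python
# RMS-error evaluation of candidate input-parameter settings (here: mesh
# size) against experimental penetration depths. All values are illustrative.
import math

experiment = [12.1, 18.4, 25.0, 31.2]    # measured depths [mm], placeholder

simulated = {                             # simulated depth per shot, per mesh
    "mesh 0.9 mm": [10.8, 16.9, 23.1, 29.0],
    "mesh 0.7 mm": [11.6, 17.8, 24.3, 30.4],
    "mesh 0.5 mm": [12.3, 18.7, 25.4, 31.6],
}

def rms_error(sim, exp):
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(exp))

for setting, depths in simulated.items():
    print(f"{setting}: RMSE = {rms_error(depths, experiment):.2f} mm")
```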

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 363