Search results for: terminological variation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2422

142 Screening Tools and Their Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that by constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording of data is a core approach for preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORT Discus, Cinahl, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English studies dating back to the year 2000 were included. Two blinded, independent reviewers selected and appraised articles on a 9-point scale for inclusion as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low and three studies as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity, and specificity (calculated as ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance in comparison with the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implication in evidence-based practice.
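
For readers who want to reproduce the kind of pooling reported above, a minimal sketch follows; it assumes Python with NumPy, uses simple inverse-variance fixed-effect pooling of per-study sensitivities with a normal-approximation 95% CI and a Cochran's Q based I², and the study counts are invented for illustration, not data from the review.

```python
import numpy as np

def pooled_proportion_and_I2(events, totals):
    """Fixed-effect pooling of per-study proportions (e.g. sensitivity)
    with a 95% CI, Cochran's Q and the I^2 heterogeneity statistic."""
    p = events / totals                        # per-study proportion
    var = p * (1 - p) / totals                 # binomial variance (normal approximation)
    w = 1.0 / var                              # inverse-variance weights
    p_pooled = np.sum(w * p) / np.sum(w)       # pooled estimate
    se = np.sqrt(1.0 / np.sum(w))
    ci = (p_pooled - 1.96 * se, p_pooled + 1.96 * se)   # 95% confidence interval
    Q = np.sum(w * (p - p_pooled) ** 2)        # Cochran's Q
    df = len(p) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return p_pooled, ci, I2

# Hypothetical counts (true positives, totals) for three pooled studies
tp = np.array([40, 55, 32])
n = np.array([60, 80, 50])
print(pooled_proportion_and_I2(tp, n))
```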

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 146
141 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials

Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar

Abstract:

Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure is suited to obtaining a gradient of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge and centre crack problems have been solved, additionally including holes, inclusions, and minor cracks under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. An exponential gradation in properties is imparted in the x-direction. A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks or/and holes or/and inclusions. The major crack is located at the centre of the left edge or the centre of the domain. The discontinuities, such as minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that the effect of minor cracks on the domain's failure crack length is minimal, whereas soft inclusions have a moderate effect and holes have the maximum effect. It is observed that the crack growth before failure is greater in each case when hard inclusions are present in place of soft inclusions.
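
A minimal sketch of two ingredients named in the abstract, the exponential gradation of a property across the plate and the Ramberg-Osgood strain, is given below; the moduli, yield stress, and hardening constants are assumed values for illustration only, not the parameters used in the paper.

```python
import numpy as np

# Sketch with assumed property values: exponential gradation of Young's modulus across
# the 100 mm wide FGM plate (Cu-Ni alloy at x=0, alumina at x=W), plus a common form of
# the Ramberg-Osgood strain used for elastic-plastic response.

W = 100.0e-3                       # plate width [m]
E_left, E_right = 150e9, 380e9     # assumed moduli of Cu-Ni alloy and alumina [Pa]
beta = np.log(E_right / E_left) / W

def youngs_modulus(x):
    """E(x) = E_left * exp(beta * x): exponential gradation in x."""
    return E_left * np.exp(beta * x)

def ramberg_osgood_strain(sigma, E, sigma_y=300e6, alpha=0.5, n=5.0):
    """epsilon = sigma/E + alpha*(sigma/E)*(sigma/sigma_y)**(n-1)  (assumed constants)."""
    return sigma / E + alpha * (sigma / E) * (sigma / sigma_y) ** (n - 1)

x = np.linspace(0.0, W, 5)
print(youngs_modulus(x))                                    # graded modulus along the plate
print(ramberg_osgood_strain(100e6, youngs_modulus(0.0)))    # strain at the applied 100 MPa
```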

Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)

Procedia PDF Downloads 366
140 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to differentiating thermal zones and to modifying or adapting the predefined profiles, and the design results are affected positively or negatively without any warning about it. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, and conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses and which have allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different consumption entries are analyzed and, for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are run for the optimal refurbishment solution. The variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made during the description of thermal zones on the evaluation of performance indexes.
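
The comparison described above can be illustrated with a short sketch; the percentage-difference measure follows the abstract, while NMBE and CV(RMSE) are shown only as examples of commonly used calibration indexes since the abstract does not name the specific ones used, and the monthly figures below are invented.

```python
import numpy as np

def percentage_difference(scenario, reference):
    """Percentage difference of a scenario's annual energy need vs. the reference model."""
    return 100.0 * (scenario - reference) / reference

def nmbe(measured, simulated):
    """Normalized Mean Bias Error [%] over monthly values (a common calibration index)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / ((len(m) - 1) * np.mean(m))

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE [%] over monthly values."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.sum((m - s) ** 2) / (len(m) - 1))
    return 100.0 * rmse / np.mean(m)

# Hypothetical monthly electricity use [kWh]: metered vs. one schedule scenario
metered   = [42e3, 40e3, 38e3, 30e3, 25e3, 20e3, 18e3, 18e3, 24e3, 31e3, 37e3, 41e3]
simulated = [44e3, 41e3, 36e3, 29e3, 26e3, 21e3, 19e3, 17e3, 25e3, 33e3, 36e3, 43e3]
print(percentage_difference(sum(simulated), sum(metered)),
      nmbe(metered, simulated), cv_rmse(metered, simulated))
```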

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 111
139 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements

Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang

Abstract:

Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements. In this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams. Also, the notion of local plasticity is no longer valid. Moreover, the deformed shape of the lattice structure does not correspond to the deformed shape obtained using 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerically hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. This approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic with z-struts (BCCZ) and body-centered cubic without z-struts (BCC) lattice structures. However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the results of the hybrid models is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This is also valid for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can take geometric defects into account. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Having the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (ANSYS) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.
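
The ellipse parametrization of each microbeam cross-section can be sketched as below; this is only a rough principal-axes (covariance) fit on a synthetic point cloud, not the LATANA procedure itself, and the dimensions are assumed.

```python
import numpy as np

def fit_ellipse_pca(points):
    """Fit an ellipse to a 2D cross-section point cloud of a microbeam.

    Returns (semi_major, semi_minor, rotation_angle) estimated from the principal
    axes of the point covariance; a rough stand-in for the parametrization
    described for the LATANA platform."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    semi_minor, semi_major = np.sqrt(2.0 * eigval)  # radii, valid for boundary-sampled points
    angle = np.arctan2(eigvec[1, 1], eigvec[0, 1])  # orientation of the major axis
    return semi_major, semi_minor, angle

# Synthetic cross-section: a rotated ellipse with a bit of surface roughness (mm scale assumed)
t = np.linspace(0, 2 * np.pi, 400)
a, b, theta = 0.55, 0.45, np.deg2rad(20)
x = a * np.cos(t) + 0.01 * np.random.randn(t.size)
y = b * np.sin(t) + 0.01 * np.random.randn(t.size)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = (R @ np.vstack([x, y])).T
print(fit_ellipse_pca(pts))
```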

Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure

Procedia PDF Downloads 92
138 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation

Authors: Calorine Twebaze, Jesca Balinga

Abstract:

Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km2, with the structural configuration comprising a 3-way dip-closed hanging wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, which was originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis. The process spanned seismic-to-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the 3 wells provided a geophysical interpretation that was consistent with geological picks. The generated time-depth curves showed a general increase in velocity with burial depth. However, the separation in curve trends observed below 1,100 m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, V0 + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X. The time-depth method resulted in more reliable depth surfaces, with good structural coherence between the TWT and depth maps and a minimal error of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. However, the new interpretation delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels and thus propagates from the basement to the surface and is an active fault today. It was also noted that the field is overall sparsely faulted, with more faults in its deeper part. The major structural uncertainties defined included: 1) the time horizons, due to reduced data quality especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points, due to the limited number of wells, which produced a pessimistic average velocity model.
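
A minimal sketch of the V0 + kZ depth-conversion approach and its calibration check against well tops is shown below; V0, k, the picked times, and the tops are assumed values, not the Field X data.

```python
import numpy as np

def depth_from_twt(twt_s, v0=1800.0, k=0.6):
    """Depth below datum for a linear velocity model v(z) = v0 + k*z.

    Integrating dz/dt = v0 + k*z over one-way time gives
    z = (v0/k) * (exp(k * t_owt) - 1)."""
    t_owt = twt_s / 2.0
    return (v0 / k) * (np.exp(k * t_owt) - 1.0)

# Two-way times picked at three hypothetical well locations and the
# corresponding formation tops from well data [m]
twt_at_wells = np.array([0.95, 1.02, 1.10])      # seconds
well_tops = np.array([992.0, 1071.0, 1176.0])    # metres

predicted = depth_from_twt(twt_at_wells)
print(predicted - well_tops)   # residual error at well locations, here a few metres
```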

Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches

Procedia PDF Downloads 26
137 A Study for Effective CO2 Sequestration of Hydrated Cement by Direct Aqueous Carbonation

Authors: Hyomin Lee, Jinhyun Lee, Jinyeon Hwang, Younghoon Choi, Byeongseo Son

Abstract:

Global warming is a worldwide issue. Various carbon capture and storage (CCS) technologies for reducing the CO2 concentration in the atmosphere have been increasingly studied. Mineral carbonation is one of the promising methods for CO2 sequestration. Waste cement generated from the aggregate recycling processes of waste concrete is potentially a good raw material containing reactive components for mineral carbonation. The major goal of our long-term project is to develop effective methods for CO2 sequestration using waste cement. In the present study, the carbonation characteristics of hydrated cement were examined by conducting two different direct aqueous carbonation experiments. We also evaluated the influence of NaCl and MgCl2 as additives to increase the mineral carbonation efficiency of hydrated cement. Cement paste was made with W:C = 6:4 and stored for 28 days in a water bath. The prepared cement paste was pulverized to a size of less than 0.15 mm. 15 g of pulverized cement paste and 200 ml of solutions containing the additives were reacted at ambient temperature and pressure. 1 M NaCl and 0.25 M MgCl2 were selected as additives after a leaching test. Two different sources of CO2 were applied for the direct aqueous carbonation experiments: 0.64 M NaHCO3 was used as the CO2 donor in method 1, and pure CO2 gas (99.9%) was bubbled into the reacting solution at a flow rate of 20 ml/min in method 2. The pH and Ca ion concentration were continuously measured with a pH/ISE multiparameter instrument to observe the carbonation behavior. Material characterization of the reacted solids was performed by TGA, XRD, and SEM/EDS analyses. The carbonation characteristics of hydrated cement differed significantly with the additives. Calcite was the dominant calcium carbonate mineral after the two carbonation experiments with no additive and with the NaCl additive. Significant amounts of aragonite and vaterite, as well as very fine calcite of poorer crystallinity, were formed with the MgCl2 additive. CSH (calcium silicate hydrate) in the hydrated cement was changed to MSH (magnesium silicate hydrate). This transformation contributed to the high carbonation efficiency. The carbonation experiment with method 1 revealed that the carbonation of hydrated cement took a relatively long time in the MgCl2 solution compared to the NaCl solution, and that the contents of aragonite and vaterite increased with increasing reaction time. In order to maximize the carbonation efficiency in direct aqueous carbonation with CO2 gas injection (method 2), control of the solution pH was important. The solution pH decreased with the injection of CO2 gas. Therefore, the carbonation efficiency in direct aqueous carbonation was closely related to the stability of the calcium carbonate minerals with pH changes. With no additive and with the NaCl additive, the maximum carbonation was achieved when the solution pH was greater than 11. Calcium carbonate formed by mineral carbonation seemed to re-dissolve as the pH decreased below 11 with continuous CO2 gas injection. The type of calcium carbonate mineral formed during carbonation in the MgCl2 solution was closely related to the variation of the solution pH caused by CO2 gas injection. The amount of aragonite increased significantly with decreasing solution pH, whereas the amount of calcite decreased.

Keywords: CO2 sequestration, mineral carbonation, cement and concrete, MgCl2 and NaCl

Procedia PDF Downloads 353
136 Unscrupulous Intermediaries in International Labour Migration of Nepal

Authors: Anurag Devkota

Abstract:

Foreign employment serves as the strongest pillar in generating employment options for a large number of the young Nepali population. Nepali workers are forced to leave the comfort of their homes and are exposed to precarious conditions on a journey to earn enough money to better their lives. The exponential rise in foreign labour migration has produced a snowball effect on the economy of the nation. The dramatic variation in the economic development of the state has established that migration is increasingly significant for livelihood, economic development, political stability, academic discourse, and policy planning in Nepal. Foreign employment practice in Nepal largely incorporates the role of individual agents in the entire migration process. With the fraudulent acts and false promises of these agents, the problems associated with every Nepali migrant worker start at home. The workers encounter tremendous pre-departure malpractice and exploitation at home by different individual agents during different stages of processing. Although these widespread and repeated malpractices of intermediaries are dominant and deeply rooted, the agents have been allowed to walk free in the absence of proper laws to curb their wrongdoings and misconduct. It has been found that the existing regulatory mechanisms have not been utilised to their full efficacy and often fall short of addressing the actual concerns of the workers because of complex legal and judicial procedures. Structural changes in the judicial setting will help bring perpetrators under the law and move victims towards access to justice. Thus, a qualitative improvement in the overall situation of Nepali migrant workers calls for a proper 'regulatory' arrangement vis-à-vis these brokers. Hence, the author aims to carry out a doctrinal study using reports and scholarly articles as the major sources of data collection. Various reports published by different non-governmental and governmental organizations working in the field of labour migration will be examined, and the research will focus on inductive and deductive data analysis. The real challenge of establishing a pro-migrant-worker regime in recent times is to bring the agents under the jurisdiction of the courts in Nepal. The Gulf Visit Study Report, 2017, prepared and launched by the International Relations and Labour Committee of the Legislature-Parliament of Nepal, finds that solving the problems at home solves 80 percent of the problems concerning migrant workers in Nepal. Against this backdrop, this research study is intended to determine ways and measures to curb the role of agents in the foreign employment and labour migration process of Nepal. It will further dig deeper into the regulatory mechanisms of Nepal and map out the essential determinants behind the impunity of agents.

Keywords: foreign employment, labour migration, human rights, migrant workers

Procedia PDF Downloads 99
135 Development and Testing of an Instrument to Measure Beliefs about Cervical Cancer Screening among Women in Botswana

Authors: Ditsapelo M. McFarland

Abstract:

Background: Despite the availability of Pap smear services in urban areas in Botswana, most women in such areas do not seem to screen regularly for the prevention of cervical cancer. The reasons for non-use of the available Pap smear services are not well understood. Beliefs about cancer may influence participation in cancer screening in these women. The purpose of this study was to develop an instrument to measure beliefs about cervical cancer and Pap smear screening among Black women in Botswana, and to evaluate the psychometric properties of the instrument. Significance: Instruments designed to measure beliefs about cervical cancer and screening among Black women in Botswana, as well as in the surrounding region, are presently not available. Valid and reliable instruments are needed for exploration of the women's beliefs about cervical cancer. Conceptual Framework: The Health Belief Model (HBM) provided the conceptual framework for the study. Methodology: The study was done in four phases. Phase 1, item generation: 15 items were generated from the literature review and qualitative data for each of four conceptually defined HBM constructs: perceived susceptibility, severity, benefits, and barriers (Version 1). Phase 2, content validity: four experts who were advanced practice nurses of African descent and were familiar with the content and the HBM evaluated the content. The experts rated the items on a 4-point Likert scale: 1 = not relevant, 2 = somewhat relevant, 3 = relevant, and 4 = very relevant. Fifty-five items were retained for instrument development: perceived susceptibility - 11, severity - 14, benefits - 15, and barriers - 15, all measured on a 4-point Likert scale ranging from strongly disagree (1) to strongly agree (4) (Version 2). Phase 3, pilot testing: the instrument was pilot tested on a convenience sample of 30 women in Botswana and revised as needed. Phase 4, reliability: the revised instrument (Version 3) was administered to a larger sample of women in Botswana (n=300) for reliability testing. The sample included women who were Batswana by birth and descent, were aged 30 years and above, and could complete an English questionnaire. Data were collected with the assistance of trained research assistants. Major findings: Confirmatory factor analysis of the 55 items found that a number of items did not adequately load in a four-factor solution. Items that exhibited reasonable reliability and had a low frequency of missing values (n=36) were retained: perceived barriers (14 items), perceived benefits (8 items), perceived severity (4 items), and perceived susceptibility (10 items). Confirmatory factor analysis (principal components) for a four-factor solution using varimax rotation demonstrated that these four factors explained 43% of the variation in these 36 items. Conclusion: Reliability analysis using Cronbach's alpha gave generally satisfactory results, with values from 0.53 to 0.89.
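
Reliability testing of the kind reported in Phase 4 can be illustrated with a short Cronbach's alpha sketch; the item responses below are invented, not study data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) array of Likert scores."""
    X = np.asarray(item_scores, float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Hypothetical responses (5 women x 4 perceived-severity items on a 1-4 scale)
severity_items = [[3, 4, 3, 4],
                  [2, 2, 3, 2],
                  [4, 4, 4, 3],
                  [1, 2, 1, 2],
                  [3, 3, 4, 4]]
print(round(cronbach_alpha(severity_items), 2))
```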

Keywords: cervical cancer, factor analysis, psychometric evaluation, varimax rotation

Procedia PDF Downloads 104
134 Transient Heat Transfer: Experimental Investigation near the Critical Point

Authors: Andreas Kohlhepp, Gerrit Schatte, Wieland Christoph, Spliethoff Hartmut

Abstract:

In recent years, research on the heat transfer phenomena of water and other working fluids near the critical point has experienced growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a high difference between fluid and wall temperatures may cause problems. They may lead to deteriorated heat transfer (at supercritical pressures), dry-out, or departure from nucleate boiling (at subcritical pressures), all cases leading to an excessive rise of temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles, or fuel rods). In transient processes, the state-of-the-art method of calculating the heat transfer coefficients is to use a multitude of different steady-state correlations, evaluated for the momentarily existing local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients and shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation using experimental data. By the use of a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under dynamic boundary conditions as mentioned above and to serve for the validation of transient steam generator calculations. Aiming to improve correlations for the prediction of the onset of deteriorated heat transfer in both stationary and transient cases, the test rig was specially designed for this task. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a large range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements contain the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows the local heat transfer coefficients and fluid enthalpies to be accurately determined.
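
The state-of-the-art approach described above, applying a steady-state correlation to the instantaneous local parameters at each time step, can be sketched as follows; the Dittus-Boelter correlation is used here only as a generic stand-in for the correlations assessed in the work, and all property values are assumed.

```python
import numpy as np

def htc_dittus_boelter(m_dot, d, mu, cp, k_fluid):
    """Heat transfer coefficient from the Dittus-Boelter correlation,
    h = 0.023 * Re^0.8 * Pr^0.4 * k / d, evaluated from local bulk properties."""
    area = np.pi * d ** 2 / 4.0
    G = m_dot / area                 # mass flux [kg/(m^2 s)]
    Re = G * d / mu
    Pr = cp * mu / k_fluid
    Nu = 0.023 * Re ** 0.8 * Pr ** 0.4
    return Nu * k_fluid / d

# Hypothetical load ramp: mass flow decreasing while fluid properties are held constant
d = 0.02                                   # tube inner diameter [m]
for m_dot in [0.30, 0.25, 0.20, 0.15]:     # kg/s at successive time steps
    h = htc_dittus_boelter(m_dot, d, mu=2.5e-5, cp=5.0e3, k_fluid=0.12)
    print(f"m_dot={m_dot:.2f} kg/s  ->  h={h/1e3:.1f} kW/m^2K")
```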

Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators

Procedia PDF Downloads 199
133 Positron Emission Tomography Parameters as Predictors of Pathologic Response and Nodal Clearance in Patients with Stage IIIA NSCLC Receiving Trimodality Therapy

Authors: Andrea L. Arnett, Ann T. Packard, Yolanda I. Garces, Kenneth W. Merrell

Abstract:

Objective: Pathologic response following neoadjuvant chemoradiation (CRT) has been associated with improved overall survival (OS). Conflicting results have been reported regarding the pathologic predictive value of positron emission tomography (PET) response in patients with stage III lung cancer. The aim of this study was to evaluate the correlation between post-treatment PET response and pathologic response utilizing novel FDG-PET parameters. Methods: This retrospective study included patients with non-metastatic, stage IIIA (N2) NSCLC treated with CRT followed by resection. All patients underwent PET prior to and after neoadjuvant CRT. Univariate analysis was utilized to assess correlations between PET response, nodal clearance, pCR, and near-complete pathologic response (defined as microscopic residual disease or less). Maximum standard uptake value (SUVmax), standard uptake ratio (SUR) [normalized independently to the liver (SUR-L) and blood pool (SUR-BP)], metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured pre- and post-chemoradiation. Results: A total of 44 patients were included for review. Median age was 61.9 years, and median follow-up was 2.6 years. Histologic subtypes included adenocarcinoma (72.2%) and squamous cell carcinoma (22.7%), and the majority of patients had T2 disease (59.1%). The rates of pCR and near-complete pathologic response within the primary lesion were 28.9% and 44.4%, respectively. The average reduction in SUVmax was 9.2 units (range -1.9-32.8), and the majority of patients demonstrated some degree of favorable treatment response. SUR-BP and SUR-L showed mean reductions of 4.7 units (range -0.1-17.3) and 3.5 units (range -1.7-12.6), respectively. Variation in PET response was not significantly associated with histologic subtype, concurrent chemotherapy type, stage, or radiation dose. No significant correlation was found between pathologic response and absolute change in MTV or TLG. Reductions in SUVmax and SUR were associated with an increased rate of pathologic response (p ≤ 0.02). This correlation was not impacted by normalization of SUR to the liver versus the mediastinal blood pool. A threshold of > 75% decrease in SUR-L correlated with near-complete response, with a sensitivity of 57.9% and specificity of 85.7%, as well as positive and negative predictive values of 78.6% and 69.2%, respectively (diagnostic odds ratio [DOR]: 5.6, p=0.02). A threshold of > 50% decrease in SUR was also significantly associated with pathologic response (DOR 12.9, p=0.2), but specificity was substantially lower when utilizing this threshold value. No significant association was found between nodal PET parameters and pathologic nodal clearance. Conclusions: Our results suggest that treatment response to neoadjuvant therapy as assessed on PET imaging can be a predictor of pathologic response when evaluated via SUV and SUR. SUR parameters were associated with higher diagnostic odds ratios, suggesting improved predictive utility compared to SUVmax. MTV and TLG did not prove to be significant predictors of pathologic response but may warrant further investigation in a larger cohort of patients.
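
The SUR-based response measure and the diagnostic odds ratio can be sketched as below; the uptake values and the 2x2 counts are invented for illustration and do not reproduce the study's data.

```python
def percent_change_sur(suv_pre, suv_post, ref_pre, ref_post):
    """Percent decrease in the standard uptake ratio (SUR = SUVmax / reference uptake)."""
    sur_pre, sur_post = suv_pre / ref_pre, suv_post / ref_post
    return 100.0 * (sur_pre - sur_post) / sur_pre

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    return sens, spec, dor

# Hypothetical example: lesion SUVmax 12.0 -> 2.5, liver reference 2.4 -> 2.2
print(percent_change_sur(12.0, 2.5, 2.4, 2.2))      # about a 77% drop, above a 75% threshold

# Hypothetical 2x2 table for "SUR decrease > 75%" vs near-complete response
print(diagnostic_odds_ratio(tp=10, fp=3, fn=7, tn=12))
```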

Keywords: lung cancer, positron emission tomography (PET), standard uptake ratio (SUR), standard uptake value (SUV)

Procedia PDF Downloads 208
132 Subtropical Potential Vorticity Intrusion Drives Increasing Tropospheric Ozone over the Tropical Central Pacific

Authors: Debashis Nath

Abstract:

Drawn from multiple reanalysis datasets, an increasing trend and a westward shift in the number of potential vorticity (PV) intrusion events over the Pacific are evident. The increased frequency can be linked to a long-term trend in the upper tropospheric (UT, 200 hPa) equatorial westerly wind and the subtropical jets (STJ) during boreal winter to spring. These may result from anomalous warming and cooling over the western Pacific warm pool and the tropical eastern Pacific, respectively. The intrusions brought dry and ozone-rich air of stratospheric origin deep into the tropics. In the tropical UT, interannual ozone variability is mainly related to convection associated with the El Niño/Southern Oscillation. The zonal mean stratospheric overturning circulation organizes the transport of ozone-rich air poleward and downward to the high and midlatitudes, leading there to higher ozone concentrations. In addition to these well described mechanisms, we observe a long-term increasing trend in the ozone flux over the northern hemispheric outer tropical (10–25°N) central Pacific that results from equatorward transport and downward mixing from the midlatitude UT and lower stratosphere (LS) during PV intrusions. This increase in tropospheric ozone flux over the Pacific Ocean may affect radiative processes and change the budget of atmospheric hydroxyl radicals. The results demonstrate a long-term increase in outer tropical Pacific PV intrusions linked with the strengthening of the upper tropospheric equatorial westerlies and the weakening of the STJ. Zonal variation in SST, characterized by gradual warming in the western Pacific warm pool and cooling in the central–eastern Pacific, is associated with the strengthening of the Pacific Walker circulation. In the western Pacific, enhanced convective activity leads to precipitation, and the latent heat released in the process strengthens the Pacific Walker circulation. This is linked with the trend in global mean temperature, which is related to the emerging anthropogenic greenhouse signal and the negative phase of the Pacific Decadal Oscillation (PDO). On the other hand, the central–eastern Pacific cooling trend is linked to the weakening of the central–eastern Pacific Hadley circulation. It suppresses convective activity through sinking air motion and imports less angular momentum to the STJ, leading to a weakened STJ. More PV intrusions result from this weaker STJ on its equatorward side, significantly increasing stratosphere–troposphere exchange processes on longer timescales. This plays an important role in determining the atmospheric composition, particularly of tropospheric ozone, in the northern outer tropical central Pacific. It may lead to more ozone of stratospheric origin in the lower troposphere and even in the marine boundary layer, where it may act as a harmful pollutant and affect radiative processes by changing the global budget of atmospheric hydroxyl radicals.

Keywords: PV intrusion, westerly duct, ozone, Central Pacific

Procedia PDF Downloads 213
131 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group

Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb

Abstract:

Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers' service experience in a new area, and does not depend on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth, and peace of mind. In addition, it gives a scientific explanation for this relationship, so this research fills a gap, since no previous work has correlated or explained these relations using such an integrated model, and this is the first application of such a modified and integrated model in the telecom field. Practically, this research gives insights to marketers and practitioners on improving customer loyalty by evolving the experience quality of broadband customers, which translates into the suggested outcomes: purchase, commitment, repeat purchase, and word-of-mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation for the results. Furthermore, customers' priorities for broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group and, in addition, be applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, for making marketing more reliable, as managers can relate investments in service experience directly to the performance closest to income, for instance repurchase behavior, positive word of mouth, and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
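
The net promoter score mentioned above is computed as the percentage of promoters minus the percentage of detractors on the 0-10 recommendation scale; a minimal sketch with invented ratings follows.

```python
def net_promoter_score(ratings):
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)
    on the 0-10 'would you recommend us' scale."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Hypothetical recommendation scores from a handful of broadband customers
print(net_promoter_score([10, 9, 8, 7, 6, 9, 4, 10, 8, 3]))   # -> 10.0
```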

Keywords: broadband services, customer experience quality, loyalty, net promoter score

Procedia PDF Downloads 245
130 The Influence of Microsilica on the Cluster Cracks' Geometry of Cement Paste

Authors: Maciej Szeląg

Abstract:

The changing nature of the environmental impacts to which cement composites are exposed during operation causes a number of phenomena in the structure of the material, which result in volume deformation of the composite. These strains can cause composite cracking. Cracks merge by propagation or intersection to form a characteristic structure of cracks known as cluster cracks. This characteristic mesh of cracks is crucial for almost all building materials working under service load conditions. Particularly dangerous for a cement matrix is a sudden load of elevated temperature, i.e., thermal shock. In a relatively short period of time, a large temperature gradient between the outer surface and the material's interior can result in crack formation on the surface and in the volume of the material. In this paper, in order to analyze the geometry of the cluster cracks of the cement pastes, image analysis tools were used. Four series of specimens made of two different Portland cements were tested. In addition, two series included microsilica as a substitute for 10% of the cement. Within each series, specimens were prepared at three w/b (water/binder) ratios: 0.4, 0.5, and 0.6. The cluster cracks were created by suddenly loading the samples with an elevated temperature of 250°C. Images of the cracked surfaces were obtained by scanning at 2400 DPI. Digital processing and measurements were performed using ImageJ v. 1.46r software. To describe the structure of the cluster cracks, three stereological parameters were proposed: the average cluster area Ā, the average length of the cluster perimeter L̄, and the average opening width of a crack between clusters Ī. The aim of the study was to identify and evaluate the relationships between the measured stereological parameters and the compressive strength and bulk density of the modified cement pastes. The tests of the mechanical and physical features were carried out in accordance with EN standards. The curves describing the relationships were developed using the least squares method, and the quality of the curve fitting to the empirical data was evaluated using three diagnostic statistics: the coefficient of determination R², the standard error of estimation Se, and the coefficient of random variation W. The use of image analysis allowed a quantitative description of the cluster cracks' geometry. Based on the obtained results, a strong correlation was found between Ā and L̄, reflecting the fractal nature of the cluster crack formation process. It was noted that the compressive strength and the bulk density of the cement pastes decrease with an increase in the values of the stereological parameters. It was also found that the main factors influencing the cluster cracks' geometry are the cement particle size and the overall binder content in a volume of the material. The microsilica reduced the Ā, L̄, and Ī values compared to the values obtained for the plain cement paste samples, which is caused by the pozzolanic properties of the microsilica.
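
The three diagnostic statistics used to judge the least-squares fits (R², Se, and W) can be sketched as below; the strength and cluster-area values are invented for illustration, and W is taken here as Se expressed as a percentage of the mean observed value, which is an assumption about the exact definition used in the paper.

```python
import numpy as np

def fit_quality(y, y_hat, n_params=2):
    """Coefficient of determination R^2, standard error of estimation Se,
    and coefficient of random variation W = Se / mean(y) for a least-squares fit."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    se = np.sqrt(ss_res / (len(y) - n_params))
    w = 100.0 * se / np.mean(y)
    return r2, se, w

# Hypothetical data: compressive strength [MPa] vs. average cluster area, fitted linearly
cluster_area = np.array([4.0, 6.0, 8.0, 10.0, 12.0])       # mm^2 (illustrative)
strength = np.array([62.0, 55.0, 51.0, 44.0, 40.0])        # MPa (illustrative)
slope, intercept = np.polyfit(cluster_area, strength, 1)   # least squares line
print(fit_quality(strength, slope * cluster_area + intercept))
```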

Keywords: cement paste, cluster cracks, elevated temperature, image analysis, microsilica, stereological parameters

Procedia PDF Downloads 226
129 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ

Authors: Lalita, Niladri Sarkar, Subhasis Ghosh

Abstract:

Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperatures, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is an indication of a strongly correlated metallic state, known as a "strange metal", attributed to non-Fermi-liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be an underlying common origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as "Planckian dissipation", a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant, and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we desire to find out, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) Universality of α ~ 1: recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should be observed in optimally doped and marginally underdoped cuprates. The link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally doped), have been used for this investigation. It is realized that steady-state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by killing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to be at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the transition from Cu-3d9 (Cu2+) to Cu-3d10 (Cu+), known as the d9 − d10L transition; photoexcitation turns some Cu ions in the CuO2 planes into spinless non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
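
The Planckian bound quoted above, 1/τ = αkBT/ℏ, can be illustrated with a short sketch; the second function extracts α from a linear-in-T resistivity slope via the simple Drude form ρ = m/(ne²τ), and the carrier density, effective mass, and slope below are assumed values, not measurements from this work.

```python
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

def planckian_time(T, alpha=1.0):
    """Inelastic scattering time from 1/tau = alpha * kB * T / hbar."""
    return hbar / (alpha * kB * T)

def alpha_from_slope(drho_dT, n, m_eff):
    """Dimensionless alpha extracted from a linear-in-T resistivity slope [Ohm m / K]
    using the Drude form rho = m / (n e^2 tau) with 1/tau = alpha kB T / hbar."""
    return (hbar / kB) * (n * e ** 2 / m_eff) * drho_dT

print(planckian_time(100.0))   # tau at 100 K, roughly 7.6e-14 s
# Assumed carrier density, effective mass and slope, for illustration only
print(alpha_from_slope(drho_dT=1.0e-8, n=5e27, m_eff=3 * m_e))
```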

Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal

Procedia PDF Downloads 34
128 Management of Dysphagia after Supra Glottic Laryngectomy

Authors: Premalatha B. S., Shenoy A. M.

Abstract:

Background: Rehabilitation of swallowing is as vital as speech rehabilitation in surgically treated head and neck cancer patients in order to maintain nutritional support, enhance wound healing, and improve quality of life. Aspiration following supraglottic laryngectomy is very common, and its rehabilitation is crucial, requiring the involvement of the speech therapist in close contact with the head and neck surgeon. Objectives: To examine swallowing outcomes after intensive therapy in supraglottic laryngectomy. Materials: Thirty-nine supraglottic laryngectomees participated in the study. Of them, 36 subjects were male and 3 were female, in the age range of 32-68 years. Eighteen subjects had undergone standard supraglottic laryngectomy (Group 1) for supraglottic lesions, whereas 21 had undergone extended supraglottic laryngectomy (Group 2) for base of tongue and lateral pharyngeal wall lesions. Prior to surgery, a visit by the speech pathologist was mandatory to assess suitability for surgery and rehabilitation. Dysphagia rehabilitation started after decannulation of the tracheostoma, focusing on orientation about the anatomy and physiological variation before and after surgery, and was tailor-made for each individual based on the type and extent of surgery. A supraglottic diet (soft solids with the supraglottic swallow method) was advocated to prevent aspiration. The success of the intervention was documented as the number of sessions taken to swallow different food consistencies and as the percentage of subjects who achieved satisfactory swallow within a given number of weeks in both groups. Results: Statistical data were computed in two ways in both groups: 1) the percentage (%) of subjects who swallowed satisfactorily within time frames ranging from less than 3 weeks to more than 6 weeks, and 2) the number of sessions taken to swallow each food consistency without aspiration. The study indicated that in Group 1 (standard supraglottic laryngectomy), 61% (n=11) were successfully rehabilitated, but their swallowing normalcy was delayed until, on average, the 29th post-operative day (3-6 weeks). Thirty-three percent (33%, n=6) of the subjects could swallow satisfactorily without aspiration even before 3 weeks, and only 5% (n=1) needed more than 6 weeks to achieve normal swallowing ability. Of the Group 2 subjects (extended SGL), only 47% (n=10) achieved satisfactory swallow by 3-6 weeks, and 24% (n=5) achieved normal swallowing ability before 3 weeks. Around 4% (n=1) needed more than 6 weeks, and as many as 24% (n=5) continued to be supplemented with nasogastric feeding even 8-10 months post-operatively, as they exhibited severe aspiration. As far as food consistencies were concerned, Group 1 subjects were able to swallow all types without aspiration much earlier than Group 2 subjects. Group 1 needed only 8 swallowing therapy sessions for thickened soft solids and 15 sessions for liquids, whereas Group 2 required 14 sessions for soft solids and 17 sessions for liquids to achieve swallowing normalcy without aspiration. Conclusion: The study highlights the importance of dysphagia intervention by the speech pathologist in supraglottic laryngectomees.

Keywords: dysphagia management, supraglotic diet, supraglottic laryngectomy, supraglottic swallow

Procedia PDF Downloads 211
127 Unifying RSV Evolutionary Dynamics and Epidemiology Through Phylodynamic Analyses

Authors: Lydia Tan, Philippe Lemey, Lieselot Houspie, Marco Viveen, Darren Martin, Frank Coenjaerts

Abstract:

Introduction: Human respiratory syncytial virus (hRSV) is the leading cause of severe respiratory tract infections in infants under the age of two. Genomic substitutions and the related evolutionary dynamics of hRSV have a great influence on virus transmission behavior. The evolutionary patterns formed are due to a precarious interplay between the host immune response and RSV, thereby selecting the most viable and less immunogenic strains. Studying genomic profiles can teach us which genes, and consequently which proteins, play an important role in RSV survival and transmission dynamics. Study design: In this study, genetic diversity and evolutionary rate analyses were conducted on 36 RSV subgroup B and 37 subgroup A whole genome sequences. Clinical RSV isolates were obtained from nasopharyngeal aspirates and swabs of children between 2 weeks and 5 years of age. These strains were collected during epidemic seasons from 2001 to 2011 in the Netherlands and Belgium and sequenced by either conventional or 454 sequencing. Sequences were analyzed for genetic diversity, recombination events, synonymous/non-synonymous substitution ratios, and epistasis, and the translational consequences of mutations were mapped to known 3D protein structures. We used Bayesian statistical inference to estimate the rate of RSV genome evolution and the rate of variability across the genome. Results: The A and B profiles were described in detail and compared to each other. Overall, the majority of the whole RSV genome is highly conserved among all strains. The attachment protein G was the most variable protein, and its gene had, similar to the non-coding regions in RSV, elevated (two-fold) substitution rates compared with other genes. In addition, the G gene was identified as the major target for diversifying selection. Overall, less gene and protein variability was found within RSV-B compared to RSV-A, and most protein variation between the subgroups was found in the F, G, SH, and M2-2 proteins. For the F protein, mutations and correlated amino acid changes are largely located in the F2 ligand-binding domain. The small hydrophobic protein, the phosphoprotein, and the nucleoprotein are the most conserved proteins. The evolutionary rates were similar in both subgroups (A: 6.47E-04, B: 7.76E-04 substitutions/site/yr), but estimates of the time to the most recent common ancestor were much lower for RSV-B (B: 19, A: 46.8 yrs), indicating that there is more turnover in this subgroup. Conclusion: This study provides a detailed description of whole RSV genome mutations, their effect on translation products, and the first estimate of the RSV genome evolution tempo. The immunogenic G protein seems to require high substitution rates in order to select less immunogenic strains, while other, conserved proteins are most likely essential to preserve RSV viability. The resulting G gene variability makes its protein a less interesting target for RSV intervention methods. The more conserved RSV F protein, with less antigenic epitope shedding, is therefore more suitable for developing therapeutic strategies or vaccines.

Keywords: drug target selection, epidemiology, respiratory syncytial virus, RSV

Procedia PDF Downloads 384
126 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. This technique is known as the observable method, based on the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the wire size in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, showing the capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms will generate smaller scales which sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on the numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. Single rising bubble and Rayleigh-Taylor instability problems are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method is used for the spatial discretization, which does not introduce numerical diffusion, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are particularly examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
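
The inviscid filtering of the convective term can be illustrated in one dimension as below; this is only a sketch of a Helmholtz-type low-pass filter applied spectrally at an assumed observable scale, not the two-phase observable Euler solver used in the paper.

```python
import numpy as np

# Minimal 1D analogue of the filtering idea: the convective velocity is replaced by its
# Helmholtz-filtered ("observable") counterpart at an observable scale alpha, computed
# spectrally as in a pseudo-spectral discretization.

N = 256
L = 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi       # wavenumbers
alpha = 2 * L / N                                # observable scale ~ a couple of grid lengths

u = np.sin(x) + 0.5 * np.sin(8 * x)              # velocity with a sharper small-scale component

u_hat = np.fft.fft(u)
u_bar = np.real(np.fft.ifft(u_hat / (1.0 + (alpha * k) ** 2)))   # Helmholtz-filtered velocity
du_dx = np.real(np.fft.ifft(1j * k * u_hat))

convective_raw = u * du_dx               # unregularized nonlinear term
convective_observed = u_bar * du_dx      # filtered (observable) convective term
print(np.max(np.abs(convective_raw)), np.max(np.abs(convective_observed)))
```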

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 472
125 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, which cover a wide spectrum ranging from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals to the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway has in cellular communication, and therefore in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
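
A minimal sketch of Gillespie's algorithm, the stochastic engine mentioned above, is given below; the two-cell ligand-receptor scheme and its rate constants are invented toy values and are not the Wnt/β-catenin model implemented in Cellulat.

```python
import random, math

def gillespie(x0, reactions, t_end):
    """Minimal Gillespie stochastic simulation algorithm.

    x0: dict of species counts; reactions: list of (propensity_fn, stoichiometry dict);
    returns the trajectory as a list of (time, state) tuples."""
    t, x = 0.0, dict(x0)
    traj = [(t, dict(x))]
    while t < t_end:
        props = [rate(x) for rate, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0    # exponential waiting time
        r = random.random() * a0                      # choose which reaction fires
        for (rate, stoich), a in zip(reactions, props):
            r -= a
            if r <= 0.0:
                for species, change in stoich.items():
                    x[species] += change
                break
        traj.append((t, dict(x)))
    return traj

# Toy two-cell signalling scheme: a ligand L emitted by cell A activates receptor R on cell B
reactions = [
    (lambda s: 2.0,                      {"L": +1}),                     # secretion by cell A
    (lambda s: 0.05 * s["L"] * s["R"],   {"L": -1, "R": -1, "RL": +1}),  # binding on cell B
    (lambda s: 0.1 * s["RL"],            {"RL": -1, "R": +1}),           # complex turnover
]
print(gillespie({"L": 0, "R": 50, "RL": 0}, reactions, t_end=50.0)[-1])
```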

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 230
124 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Paralympic Team

Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez

Abstract:

The training of cyclists with disabilities finds an indispensable ally in technological development, which generates advances every day that contribute to quality of life and allow athletes to maximize their capacities. The performance of a cyclist depends on physiological and biomechanical factors, such as aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study focuses in particular on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and the sprocket of the track bicycle. The parametric scheme of the track bike represents a model with 6 degrees of freedom, given by the X-Y displacement of each reference point as a function of the curve profile angle β, the velodrome cant α, and the crank rotation angle φ. The force exerted on the crank of the bicycle varies according to the curve profile angle β, the velodrome cant α, and the crank rotation angle φ. The behavior is analyzed using MATLAB R2015a. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement has been determined, the dynamic modeling of the transtibial prosthesis is carried out; this represents a model with 6 degrees of freedom, with X-Y displacements related to the rotation angles of the hip π, knee γ and ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and MATLAB R2015a, which were used to model and analyze the variation of the hip π, knee γ and ankle λ angles of the prosthesis. The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes. The same analysis was then applied to the tibia of the prosthesis and the socket. The reaction force on each part of the prosthesis varies according to the hip π, knee γ and ankle λ angles of the prosthesis. It can therefore be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
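
As a minimal illustration of the force-summation step described above (ΣFx = 0, ΣFy = 0 at the ankle), the sketch below resolves a pedal force into X and Y components and balances it against a segment weight; the crank angle, segment mass, and the planar quasi-static simplification are assumptions for illustration, not the authors' full 6-degree-of-freedom model.

```python
import math

def ankle_reaction(pedal_force, crank_angle_deg, segment_mass, g=9.81):
    """Quasi-static planar force balance at the ankle joint.

    pedal_force     -- force applied on the crank by the foot [N]
    crank_angle_deg -- crank rotation angle phi [degrees]
    segment_mass    -- mass of the foot/pedal assembly [kg]
    Returns the (Rx, Ry) reaction force at the ankle [N].
    """
    phi = math.radians(crank_angle_deg)
    # Components of the pedal force along X and Y
    fx = pedal_force * math.cos(phi)
    fy = pedal_force * math.sin(phi)
    # Sum of forces = 0 -> the reaction balances the pedal force and the segment weight
    rx = -fx
    ry = -(fy - segment_mass * g)
    return rx, ry

# Hypothetical values: 803.6 N per crank (from the abstract), phi = 70 deg, 1.2 kg foot segment
print(ankle_reaction(803.6, 70.0, 1.2))
```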

Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis

Procedia PDF Downloads 306
123 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on the geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: a set of candidate solutions, elements of the function's domain, is created randomly, and the quality function is applied as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered from one iteration to the next, but the antenna parameters and geometries are too varied to fit into a single function. Therefore, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are governed directly by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation is mainly aimed at studying the practical computational, processing, and design complexities incurred during simulation. HFSS is chosen for the simulations and results. MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data used to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric properties; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
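
A minimal genetic-algorithm loop of the kind described above is sketched below; the fitness function is a toy surrogate standing in for a full-wave HFSS evaluation (which the paper drives through the HFSS API from MATLAB), and the two geometry parameters and their bounds are purely illustrative.

```python
import random

# Hypothetical bounds for two Vivaldi geometry parameters:
# slot line width [mm] and flare aperture size [mm]
BOUNDS = [(0.1, 2.0), (20.0, 80.0)]

def fitness(candidate):
    """Placeholder for a full-wave simulation (e.g. an HFSS run) that
    would return antenna gain for the candidate geometry."""
    slot_width, aperture = candidate
    return -(slot_width - 0.8) ** 2 - 0.001 * (aperture - 55.0) ** 2  # toy surrogate

def evolve(pop_size=20, generations=40, mutation_rate=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                        # selection: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # recombination
            if random.random() < mutation_rate:                  # mutation
                i = random.randrange(len(BOUNDS))
                lo, hi = BOUNDS[i]
                child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

In practice the surrogate would be replaced by a call into the simulator, which is why the paper parallelizes the evaluations.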

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 85
122 Better Together: Diverging Trajectories of Local Social Work Practice and Nationally-Regulated Social Work Education in the UK

Authors: Noel Smith

Abstract:

To achieve professional registration, UK social workers need to complete a programme of education and training which meets standards set down by central government. When it comes to practice, social work in local authorities must fulfil requirements of national legislation but there is considerable local variation in the organisation and delivery of services. This presentation discusses the on-going reform of social work education by central government in the context of research on social work services in a local authority. In doing so it highlights that the ‘direction of travel’ of the national reform of social work education seems at odds with the trajectory of development of local social work services. In terms of education reform, the presentation cites key government initiatives including the knowledge and skills requirements which have been published separately for child and family social work and for adult social work. Also relevant is the Government’s new ‘teaching partnership’ pilot which focuses exclusively on social work in local government, in isolation from social work in NGOs. In terms of research, the presentation discusses two studies undertaken by Professor Smith in Suffolk County Council, a local authority in the east of England. The first is an equality impact analysis of the introduction of a new model for the delivery of adult and community services in Suffolk. This is based on qualitative research with local government representatives and NGOs involved in social work with older people and people with disabilities. The second study is an on-going, mixed method evaluation of the introduction of a new model of social care for children and young people in Suffolk. This new model is based on the international ‘Signs of Safety’ approach, which is applied in this model to a wide range of services from early intervention to child protection. While both studies are localised, the service models they examine are good illustrations of the way services are developing nationally. Analysis of these studies suggests that, if services continue to develop as they currently are, then social workers will require particular skills which are not adequately addressed in the Government’s plans for social work education. Two issues arise. First, education reform concentrates on social work within local government while increasingly local authorities are outsourcing service provision to NGOs, expecting greater community involvement in providing care, and integrating social care with health care services. Second, education reform focuses on the different skills required for working with older and disabled adults and working with children and families, to the point where potentially the profession would be fragmented into two different classes of social worker. In contrast, the development of adult and children’s services in local authorities re-asserts the importance of common social work skills relating to personalisation, prevention and community development. The presentation highlights the importance for social work education in the UK to be forward looking, in terms of the changing design of service delivery, and outward looking, in terms of lessons to be drawn from international social work.

Keywords: adult social work, children and families social work, European social work, social work education

Procedia PDF Downloads 270
121 Cytochrome B Diversity and Phylogeny of Egyptian Sheep Breeds

Authors: Othman E. Othman, Agnés Germot, Daniel Petit, Abderrahman Maftah

Abstract:

Threats to biodiversity are increasing due to the loss of genetic diversity within the species utilized in agriculture. Owing to the progressive substitution of less productive, locally adapted, native breeds by highly productive breeds, the number of threatened breeds has increased. In these conditions, it is more strategically important than ever to preserve as much of the farm animal diversity as possible, to ensure a prompt and proper response to the needs of future generations. Mitochondrial DNA (mtDNA) sequencing has been used to explain the origins of many modern domestic livestock species. Studies based on sequencing of sheep mitochondrial DNA showed that there are five maternal lineages in the world for domestic sheep breeds: A, B, C, D and E. Because of the eastern location of Egypt in the Mediterranean basin and the presence of fat-tailed sheep breeds, a character quite common in Turkey and Syria where genotypes seem quite primitive, phylogenetic studies of Egyptian sheep breeds are particularly attractive. In this work we aimed to clarify the genetic affinities, biodiversity and phylogeny of five Egyptian sheep breeds using cytochrome B sequencing. Blood samples were collected from 63 animals belonging to the five tested breeds: Barki, Rahmani, Ossimi, Saidi and Sohagi. The total DNA was extracted, and specific primers allowed the conventional PCR amplification of the cytochrome B region of mtDNA (approximately 1272 bp). PCR-amplified products were purified and sequenced. The alignment of the sixty-three samples was done using BioEdit software. DnaSP 5.00 software was used to identify the sequence variation and polymorphic sites in the aligned sequences. The results showed the presence of 34 polymorphic sites, leading to the formation of 18 haplotypes. The haplotype diversity in the five tested breeds ranged from 0.676 in the Rahmani breed to 0.894 in the Sohagi breed. The genetic distances (D) and the average number of pairwise differences (Dxy) between breeds were estimated. The lowest distance was observed between Rahmani and Saidi (D: 1.674 and Dxy: 0.00150), while the highest distance was observed between Ossimi and Sohagi (D: 5.233 and Dxy: 0.00475). A neighbour-joining phylogenetic tree was constructed using MEGA 5.0 software. The sequences of the 63 analyzed samples were aligned with reference sequences of the different haplogroups. The phylogeny results showed the presence of three haplogroups (HapA, HapB and HapC) in the 63 examined samples; the other two haplogroups described in the literature (HapD and HapE) were not found. The results showed that 50 of the 63 tested animals cluster with haplogroup B (79.37%), whereas 7 animals cluster with haplogroup A (11.11%) and 6 animals cluster with haplogroup C (9.52%). In conclusion, the phylogenetic reconstructions showed that the majority of Egyptian sheep breeds belong to haplogroup B, which is the dominant haplogroup in Eastern Mediterranean countries such as Syria and Turkey. Some individuals belong to haplogroups A and C, suggesting that crosses were made with other breeds to select for growth and wool quality traits.
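
The haplotype diversity values reported above correspond to Nei's estimator, Hd = n/(n-1) · (1 − Σ pᵢ²), where pᵢ is the frequency of the i-th haplotype among n sequences (the standard estimator used by tools such as DnaSP). The sketch below evaluates it for a hypothetical list of haplotype assignments, since the per-breed haplotype lists are not reproduced in the abstract.

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's haplotype diversity: Hd = n/(n-1) * (1 - sum(p_i**2))."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)

# Hypothetical haplotype labels assigned to one breed's samples
samples = ["H1", "H1", "H2", "H3", "H1", "H2", "H4", "H3", "H1", "H5"]
print(round(haplotype_diversity(samples), 3))
```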

Keywords: cytochrome B, diversity, phylogeny, Egyptian sheep breeds

Procedia PDF Downloads 350
120 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule patient visits and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, and gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current experience-based appointment duration estimation approach adopted by the clinic resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). This research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
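
The model comparison described above can be reproduced in outline with standard libraries; the sketch below fits a random forest and a gradient boosting machine and scores them with MAPE on a synthetic dataset whose feature names merely echo the variables listed in the abstract (the actual EMR data are not public).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical feature set inspired by the variables named in the abstract:
# patient type (new vs. established), doctor's experience, appointment day, specialty.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "new_patient": rng.integers(0, 2, n),
    "doctor_experience_yrs": rng.integers(1, 30, n),
    "appointment_day": rng.integers(0, 5, n),
    "specialty_code": rng.integers(0, 4, n),
})
# Synthetic consultation duration (minutes), for illustration only
df["duration"] = (15 + 10 * df["new_patient"] - 0.2 * df["doctor_experience_yrs"]
                  + 2 * df["specialty_code"] + rng.normal(0, 3, n))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="duration"), df["duration"], test_size=0.25, random_state=0)

for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_train, y_train)
    mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(type(model).__name__, f"MAPE = {mape:.2%}")
```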

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 90
119 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated from a single blade, by 17.7% at P = 8A. Flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch outputs 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to that generated from a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, by considering design in a confined space, counterphase pitch should be minimised to maximise the bulk airflow generated from a certain cross-sectional area within a channel flow application. Quantitative values are found to deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 195
118 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes

Authors: Taylor Harris

Abstract:

Gustilo-Anderson grade III tibial fractures are exquisitely difficult injuries to manage, as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by Orthopedic and Plastic surgeons. While utilizing an Orthoplastics approach has decreased the rates of adverse outcomes in these injuries, there is a large amount of variation in exactly how an Orthoplastics team approaches complex cases such as these. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single-stage manner, but there is a paucity of large-scale studies to provide evidence to support this recommendation. The aim of this study is to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature, with the hope of better informing an evidence-based Orthoplastics approach to managing open tibial fractures. A systematic review of the literature was performed. Medline and Google Scholar were used, and all English-language studies published since 2000 were included. 103 studies were initially evaluated for inclusion. The reference lists of all included studies were also examined for potentially eligible studies. Gustilo grade III tibial shaft fractures in adults that were managed with a single-stage Orthoplastics approach were identified and evaluated with regard to the outcomes of interest. Exclusion criteria included studies with patients under 16 years old, case studies, systematic reviews, and meta-analyses. The primary outcomes of interest were the rates of deep infection and the rates of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. 15 studies were eligible. 11 of these studies reported rates of deep infection as an outcome, with rates ranging from 0.98% to 20% and a pooled rate across studies of 7.34%. 7 studies reported rates of limb salvage, with a range of 96.25% to 100% and a pooled rate of 97.8%. 6 reported rates of non-union, with a range of 0% to 14% and a pooled rate of 6.6%. 6 reported time to bone union, with a range of 24 to 40.3 weeks and a pooled average of 34.2 weeks, and 4 reported rates of reoperation ranging from 7% to 55%, with a pooled rate of 31.1%. The few studies that compared a single-stage to a multi-stage approach side-by-side unanimously favored the single-stage approach. Gustilo grade III open tibial fractures managed with an Orthoplastics approach performed specifically in a single stage show low rates of adverse outcomes. Large-scale studies of Orthoplastic collaboration that was not completed strictly in a single stage, or was completed in multiple stages, have not reported outcomes as favorable. We recommend that Orthopedic surgeons and Plastic surgeons not only collaborate in the management of severe open tibial fractures but also plan to perform definitive fixation and coverage in a single stage for improved outcomes.
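
The pooled rates quoted above are simple pooled proportions, i.e. total events divided by total patients across the contributing studies; the sketch below shows the calculation with hypothetical per-study counts, since the individual study counts are not given in the abstract.

```python
def pooled_rate(studies):
    """Simple pooled proportion: total events / total patients across studies."""
    events = sum(e for e, n in studies)
    total = sum(n for e, n in studies)
    return events / total

# Hypothetical (events, sample size) pairs for deep infection in each included study;
# the real per-study counts are not reported in the abstract.
deep_infection = [(1, 102), (4, 55), (6, 30), (2, 48), (9, 60)]
print(f"Pooled deep infection rate: {pooled_rate(deep_infection):.1%}")
```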

Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review

Procedia PDF Downloads 68
117 Work Related Outcomes of Perceived Authentic Leadership: Moderating Role of Organizational Structures

Authors: Aisha Zubair, Anila Kamal

Abstract:

Leadership styles and practices greatly influence organizational effectiveness and productivity. They also play an important role in employees' experiences of positive emotions at the workplace and in creative work behaviors. Authentic leadership, as a newly emerging concept, has been found to be a significant predictor of various desirable work related outcomes. However, leadership practices and their work related outcomes are, to a great extent, determined by the nature of the organizational structure (tall or flat). Tall organizations are characterized by multiple hierarchical layers, predominantly vertical communication patterns, and a narrow span of control, while flat organizations feature few layers of management, both horizontal and vertical communication styles, and a wide span of control. Therefore, the present study was undertaken to determine the work related outcomes of perceived authentic leadership, namely work related flow and creative work behavior, among employees of flat and tall organizations. Moreover, it was also intended to determine the moderating role of organizational structure (flat and tall) in the relationship of perceived authentic leadership with work related flow and creative work behavior. In this regard, two types of companies were considered: banks, as a form of tall organizational structure with multiple hierarchical layers, and software companies, as flat organizations with minimal layers of management. Respondents (N = 1180) were full-time regular employees of the marketing departments of banks (600) and software companies (580), including both men and women, with an age range of 22-52 years (M = 33.24; SD = 7.81). Confirmatory factor analysis yielded factor structures of the measures of work related flow and creative work behavior in accordance with the theoretical models. However, the model of authentic leadership exhibited variation in two items, which were not included in the final measure of perceived authentic leadership. Results showed that perceived authentic leadership was positively associated with work related flow and creative work behavior. Likewise, work related flow was positively aligned with creative work behavior. Furthermore, the type of organizational structure significantly moderated the relationship of perceived authentic leadership with work related flow and creative work behavior. Results of independent-samples t-tests showed that employees working in flat organizations reported better perceptions of authentic leadership, higher work related flow, and elevated levels of creative work behavior compared to those working in tall organizations. It was also found that employees with longer job experience and longer tenure in the same organization displayed better perceptions of authentic leadership, reported more work related flow, and showed augmented levels of creative work behavior. The findings of the present study distinctively highlight the similarities as well as the differences in the interactions of the major constructs, which function differentially in the context of tall (banks) and flat (software companies) organizations. Implications of the present study for employees and management, as well as future recommendations, are also discussed.
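
Moderation of the kind reported above is commonly tested with an interaction term in a regression model; the sketch below illustrates that generic approach on synthetic data. The variable names and coefficients are invented, and the study's own analysis, as described, relied on confirmatory factor analysis and t-tests alongside the moderation test.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: authentic leadership score, organizational structure
# (0 = tall/bank, 1 = flat/software company), and work related flow.
rng = np.random.default_rng(1)
n = 1180
structure = rng.integers(0, 2, n)
leadership = rng.normal(3.5, 0.6, n)
flow = (1.0 + 0.4 * leadership + 0.3 * structure
        + 0.25 * leadership * structure + rng.normal(0, 0.5, n))
df = pd.DataFrame({"flow": flow, "leadership": leadership, "structure": structure})

# Moderation is indicated by a significant leadership x structure interaction term.
model = smf.ols("flow ~ leadership * structure", data=df).fit()
print(model.summary().tables[1])
```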

Keywords: creative work behavior, organizational structure, perceived authentic leadership, work related flow

Procedia PDF Downloads 368
116 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays

Authors: Maher Z. Mohammed, Barry G. Clarke

Abstract:

As an alternative to stone columns in fine-grained soils, it is possible to create stiffened columns of soil using electroosmosis (electroosmotic piles). The aim of this research programme is to establish the effectiveness and efficiency of the process in different soils, and the aim of this study is to assess the capability of electroosmotic treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used, with an octagonal array of anodes surrounding a single cathode in a nominally 250 mm diameter, 300 mm deep cylinder of soil and an 80 mm anode-to-cathode distance. Copper coiled springs were used as electrodes to allow the soil to consolidate due either to an external vertical applied load or to electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881 part 131) at different ratios by weight (0, 23, 33, 50 and 67%) and subjected to applied voltages of 5, 10, 15 and 20 V. The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmosis treatment began. In the electroosmosis cell tests, the settlement, expelled water, variation of electrical current and applied voltage, and generated heat were monitored during the test time for 24 osmotic tests. Water content was measured at the end of each test. The electroosmotic tests are divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil from the slurry deposit. 50 kPa was used in Phase 3 to simulate a surcharge load. The electroosmotic treatment was only performed during Phase 2, where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating that the electroosmotic process had stopped, due either to degradation of the anode or to the flow driven by the hydraulic gradient exactly balancing the electroosmotic flow, resulting in no net flow. Control tests for each soil mixture were carried out to assess the behaviour of the soil samples subjected only to an increase of vertical pressure, that is, 15 kPa in Phase 1 and 50 kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment process decreased as the sand content increased. Soil temperature increased significantly when electrical power was applied and dropped when the applied DC power was turned off or when the electrode degraded. The highest increase in temperature was found in the pure clays at the higher applied voltages, after about 8 hours of the electroosmosis test.

Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio

Procedia PDF Downloads 135
115 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system's predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error deviation of the FCOM prediction of the engine fan speed was reduced from 5.0% to 0.2% after only ten flights.
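
A bare-bones version of the adaptive lookup table idea is sketched below: a small fuel-flow table indexed by altitude and Mach is interpolated for prediction and nudged toward in-flight measurements with a fixed learning rate. The grid, initial values, and update rule are illustrative assumptions, not the authors' Citation X implementation.

```python
import numpy as np

class AdaptiveLookupTable:
    """Illustrative 2-D lookup table of cruise fuel flow vs. altitude and Mach,
    corrected in flight toward measured values."""

    def __init__(self, altitudes, machs, initial_fuel_flow, learning_rate=0.3):
        self.altitudes = np.asarray(altitudes)                  # table breakpoints [ft]
        self.machs = np.asarray(machs)                          # table breakpoints [-]
        self.table = np.array(initial_fuel_flow, dtype=float)   # fuel flow [kg/h]
        self.lr = learning_rate

    def predict(self, altitude, mach):
        """Bilinear interpolation inside the table."""
        i = np.clip(np.searchsorted(self.altitudes, altitude) - 1, 0, len(self.altitudes) - 2)
        j = np.clip(np.searchsorted(self.machs, mach) - 1, 0, len(self.machs) - 2)
        x = (altitude - self.altitudes[i]) / (self.altitudes[i + 1] - self.altitudes[i])
        y = (mach - self.machs[j]) / (self.machs[j + 1] - self.machs[j])
        t = self.table
        return ((1 - x) * (1 - y) * t[i, j] + x * (1 - y) * t[i + 1, j]
                + (1 - x) * y * t[i, j + 1] + x * y * t[i + 1, j + 1])

    def update(self, altitude, mach, measured):
        """Pull the nearest grid cell toward the in-flight measurement."""
        i = np.argmin(abs(self.altitudes - altitude))
        j = np.argmin(abs(self.machs - mach))
        error = measured - self.table[i, j]
        self.table[i, j] += self.lr * error

# Hypothetical FCOM-style initial table and a sequence of in-flight sensor samples
apm = AdaptiveLookupTable([35000, 41000], [0.70, 0.80],
                          [[1400.0, 1550.0], [1200.0, 1350.0]])
for alt, mach, measured in [(41000, 0.80, 1420.0), (41000, 0.80, 1415.0)]:
    apm.update(alt, mach, measured)
print(round(apm.predict(41000, 0.80), 1))
```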

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 229
114 Investigating Sub-daily Responses of Water Flow of Trees in Tropical Successional Forests in Thailand

Authors: Pantana Tor-Ngern

Abstract:

In the global water cycle, tree water use (Tr) contributes substantially to evapotranspiration, the total amount of water evaporated from terrestrial ecosystems to the atmosphere, which regulates climate. Tree water use responds to environmental factors, including atmospheric humidity and sunlight (represented by vapor pressure deficit, VPD, and photosynthetically active radiation, PAR, respectively) and soil moisture. In forests, Tr responses to such factors depend on species and on their spatial and temporal variations. Tropical forests in Southeast Asia (SEA) have experienced land-use conversion and subsequent abandonment of agricultural land, resulting in patches of forest at different successional stages, including old-growth and secondary forests. Because inherent structures such as canopy height and tree density vary significantly among forests at different stages and can strongly affect their respective microclimates, Tr and its responses to changing environmental conditions may differ among successional forests. Daily and seasonal variations in the environmental factors may exert significant impacts on the respective Tr patterns. Extrapolating Tr data from short periods of days to longer periods of seasons or years can be complex and is important for estimating long-term ecosystem water use, which often includes normal and abnormal climatic conditions. Thus, this study aims to investigate the diurnal variation of Tr, using measured sap flux density (JS) data, with changes in VPD in eight evergreen tree species in an old-growth forest (hereafter OF; >200 years old) and a young forest (hereafter YF; <10 years old) in Khao Yai National Park, Thailand. The studied species included Syzygium syzygoides, Aquilaria crassna, Cinnamomum subavenium, Nephelium melliferum and Altingia excelsa in OF, and Syzygium nervosum and Adinandra integerrima in YF; only Syzygium antisepticum was found in both forest stages. Specifically, hysteresis, which indicates asymmetrical changes of JS in response to changing VPD across the daily timescale, was examined in these species. Results showed no hysteresis in any OF species except Altingia excelsa, which exhibited a 3-hour delayed JS response to VPD. In contrast, JS of all species in YF displayed one-hour delayed responses to VPD. The OF species that showed no hysteresis indicated that their canopies were well coupled with the atmosphere, facilitating the gas exchange that is essential for tree growth. The delayed responses of Altingia excelsa in OF and of all species in YF were associated with higher JS in the morning than in the afternoon. This implies that these species were sensitive to drying air, closing their stomata relatively rapidly as atmospheric humidity declined (i.e., as VPD increased). Such behavior is often observed in trees growing in dry environments. This study suggests that detailed investigation of JS at sub-daily timescales is imperative for a better understanding of the mechanistic responses of trees to the changing climate, which will benefit the improvement of earth system models.
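
One simple way to quantify the kind of asymmetry described above is a lagged correlation between JS and VPD; the sketch below finds the time shift that maximizes their correlation on synthetic half-hourly diurnal curves. It is an illustrative diagnostic only; the abstract does not specify the exact hysteresis analysis used in the study.

```python
import numpy as np

def js_vpd_lag(js, vpd, max_lag_steps=12, step_hours=0.5):
    """Find the time shift (hours) that maximizes the correlation between sap
    flux density (js) and VPD.  A non-zero shift indicates hysteresis: a
    negative value means js peaks before vpd (morning-biased sap flow), a
    positive value means js follows vpd.  Both series share the same time step."""
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag_steps, max_lag_steps + 1):
        if lag > 0:
            a, b = js[lag:], vpd[:-lag]
        elif lag < 0:
            a, b = js[:lag], vpd[-lag:]
        else:
            a, b = js, vpd
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag * step_hours, best_r

# Hypothetical half-hourly diurnal curves: VPD peaks at 13:00, JS one hour earlier
hours = np.arange(6.0, 18.5, 0.5)
vpd = np.exp(-((hours - 13.0) ** 2) / 8.0)
js = np.exp(-((hours - 12.0) ** 2) / 8.0)
print(js_vpd_lag(js, vpd))   # expected: a shift of about -1.0 hours
```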

Keywords: sap flow, tropical forest, forest succession, thermal dissipation probe

Procedia PDF Downloads 35
113 Optimization of the Jatropha curcas Supply Chain as a Criteria for the Implementation of Future Collection Points in Rural Areas of Manabi-Ecuador

Authors: Boris G. German, Edward Jiménez, Sebastián Espinoza, Andrés G. Chico, Ricardo A. Narváez

Abstract:

The unique flora and fauna of the Galapagos Islands have fostered tourism-driven growth in the islands. Nonetheless, such development is energy-intensive and requires thousands of gallons of diesel each year for thermoelectric electricity generation. The transport of fossil fuels from the mainland has caused oil spillages and damage to the fragile ecosystem of the islands. The Zero Fossil Fuels initiative for the Galapagos, proposed by the Ecuadorian government as an alternative to reduce the use of fossil fuels in the islands, considers the replacement of diesel in thermoelectric generators by Jatropha curcas vegetable oil. However, the Jatropha oil supply cannot yet entirely cover the demand for electricity generation in Galapagos. Within this context, the present work aims to provide an optimization model that can be used as a selection criterion for approving new Jatropha curcas collection points in rural areas of Manabi, Ecuador. For this purpose, the existing Jatropha collection points in Manabi were grouped into three regions: north (7 collection points), center (4 collection points) and south (9 collection points). Field work was carried out in every region in order to characterize the collection points, establish the local Jatropha supply, and determine transportation costs. Data collection was complemented using GIS software, and an objective function was defined in order to determine the profit associated with Jatropha oil production. The market prices of both Jatropha oil and residual cake were considered for the total revenue, whereas the Jatropha purchase price, transportation, and oil extraction costs were considered for the total cost. The tonnes of Jatropha fruit and seed transported from the collection points to the extraction plant were taken as the decision variables. The maximum and minimum amounts of Jatropha collected from each region constrained the optimization problem. The supply chain was optimized using linear programming in order to maximize the profit. Finally, a sensitivity analysis was performed in order to find a profit-based criterion for the acceptance of future collection points in Manabi. The maximum profit reached a value of $4,616.93 per year, which represented a total collection of 62.3 tonnes of Jatropha per year. The northern region of Manabi had the biggest collection share (69%), followed by the southern region (17%). The criteria for accepting new Jatropha collection points in the rural areas of Manabi can be defined by the current maximum profit of the zone and by the variation in profit when collection points are removed one at a time. The definition of new feasible collection points plays a key role in the supply chain associated with Jatropha oil production. Therefore, a mathematical model that assists decision makers in establishing new collection points while assuring profitability contributes to guaranteeing a continued Jatropha oil supply for Galapagos and sustained economic growth in the rural areas of Ecuador.
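
A compact linear-programming formulation along the lines described above can be set up with scipy; in the sketch below, the per-region profit coefficients, collection bounds, and plant throughput limit are invented placeholders for the values obtained from the field work.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical net profit per tonne of Jatropha delivered to the extraction plant
# (oil + residual cake revenue minus purchase, transport and extraction costs),
# one coefficient per region: north, center, south [$/t].
profit_per_tonne = np.array([80.0, 65.0, 55.0])

# Assumed minimum and maximum collectable tonnage per region per year
bounds = [(10.0, 45.0), (2.0, 12.0), (5.0, 15.0)]

# Assumed plant throughput limit: at most 62.3 t/year can be processed
A_ub = [[1.0, 1.0, 1.0]]
b_ub = [62.3]

# linprog minimizes, so negate the profit vector to maximize it
res = linprog(c=-profit_per_tonne, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Collected tonnes (north, center, south):", np.round(res.x, 1))
print("Maximum annual profit: $", round(-res.fun, 2))
```

Re-solving with one region's bounds set to zero, one region at a time, mimics the sensitivity analysis used to judge how much each group of collection points contributes to the profit.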

Keywords: collection points, Jatropha curcas, linear programming, supply chain

Procedia PDF Downloads 409