Search results for: statistical tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8427

987 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm3 have been identified from PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet-internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations describing the droplet-internal profiles for all relevant constituents. Heat transfer across the interface and inside the droplet is also included. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time.
Below, some preliminary simulation results for an aerosol droplet's composition and temperature profiles are given. Results: As an example, a droplet of initial size 3 microns, initially containing a 5M MEA solution, is exposed to an atmosphere free of MEA. The composition of the gas phase and the temperature change with respect to time throughout the absorber.
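For readers unfamiliar with the technique, the orthogonal collocation idea can be sketched on a toy boundary value problem. This is an illustration only, not the authors' Matlab droplet model: the unknown profile is approximated by a polynomial, and the differential equation is enforced at a set of collocation points.

```python
import numpy as np

# Toy diffusion-reaction BVP standing in for a droplet-internal profile:
#   u'' = k^2 u  on [0, 1],  u'(0) = 0 (symmetry),  u(1) = 1 (interface).
# The exact solution is cosh(k x) / cosh(k).

def collocation_solve(k=3.0, n=10):
    """Approximate u by a degree-(n-1) polynomial, enforcing the ODE at
    interior collocation points plus the two boundary conditions."""
    m = n - 2
    # interior collocation points: Chebyshev nodes mapped into (0, 1)
    xs = 0.5 * (1 - np.cos(np.pi * np.arange(1, m + 1) / (m + 1)))
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, x in enumerate(xs):          # residual of u'' - k^2 u = 0
        for j in range(n):
            d2 = j * (j - 1) * x ** (j - 2) if j >= 2 else 0.0
            A[i, j] = d2 - k**2 * x**j
    A[m, 1] = 1.0                        # u'(0) = 0  ->  coefficient c_1 = 0
    A[m + 1, :] = 1.0                    # u(1) = 1  ->  sum of c_j = 1
    b[m + 1] = 1.0
    c = np.linalg.solve(A, b)
    return lambda x: np.polyval(c[::-1], x)

u = collocation_solve()
print(u(0.5))   # close to cosh(1.5)/cosh(3) ≈ 0.2337
```

A degree-9 polynomial already reproduces cosh(kx)/cosh(k) to high accuracy; the full droplet model applies the same idea to the coupled diffusion-reaction and heat transfer equations.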

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 248
986 Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations

Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim

Abstract:

A Gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery port, the fluid is transported inside cavities formed by the teeth and driven by the shaft. From a geometric standpoint, it is worth noting that the internal gear has one tooth less than the external one. Simcenter Amesim v.16 includes a new submodel for modelling the behavior of hydraulic Gerotor pumps (THCDGP0). This submodel accounts for leakages between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the “CAD import” tool extracts the main geometrical characteristics, and the submodel THCDGP0 computes the evolution of each cavity volume and its relative position with respect to the suction or delivery areas. This module, based on international publications, gives robust results up to 6,000 rpm for pressures above atmospheric level. At higher rotational speeds or lower pressures, oil aeration and cavitation effects become significant and sharply degrade the pump's performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in free form (i.e., undissolved, as bubbles) when the pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry's law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors were set up upstream of the pump to estimate the sound speed in the oil. Analytical models were compared with the experimental sound speed to estimate the occluded gas content. The Simcenter Amesim v.16 model was calibrated with these analyses, which successfully improved the simulation results up to 14,000 rpm.
This work provides a sound foundation for designing the next generation of Gerotor pumps, reaching rotation speeds above 25,000 rpm. The results of the improved module will be compared with tests on this new pump demonstrator.
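The dissolved/undissolved gas split and its drastic effect on sound speed can be illustrated with a deliberately simplified sketch: a linearised Henry's law and Wood's equation for a bubbly liquid. The fluid properties below are assumed round numbers; this is not the Amesim THCDGP0 submodel.

```python
import math

def undissolved_fraction(p, p_sat, x_total=0.09):
    """Linearised Henry's law: gas dissolves in proportion to pressure and
    is fully dissolved at the saturation pressure p_sat."""
    dissolved = x_total * min(p / p_sat, 1.0)
    return x_total - dissolved

def mixture_sound_speed(alpha, rho_l=870.0, c_l=1400.0, rho_g=1.2, c_g=340.0):
    """Wood's equation: sound speed of a liquid carrying a volume
    fraction alpha of undissolved gas bubbles."""
    rho = (1 - alpha) * rho_l + alpha * rho_g
    compressibility = (1 - alpha) / (rho_l * c_l**2) + alpha / (rho_g * c_g**2)
    return 1.0 / math.sqrt(rho * compressibility)

print(mixture_sound_speed(0.0))    # pure oil: 1400 m/s
print(mixture_sound_speed(0.01))   # 1% free gas collapses the sound speed
```

Even one percent of free gas cuts the mixture sound speed by roughly an order of magnitude, which is why sound-speed measurements upstream of the pump are such a sensitive probe of occluded gas content.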

Keywords: gerotor pump, high speed, numerical simulations, aeronautic, aeration, cavitation

Procedia PDF Downloads 120
985 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed and, hence, the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through an extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
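The inefficiency of least squares under fat tails is easy to demonstrate in the location case with a small Monte Carlo experiment. In this sketch the sample median stands in for a generic robust estimator (it is not the paper's modified maximum likelihood estimator), and the errors are drawn from a Student t with 3 degrees of freedom:

```python
import numpy as np

# Under fat-tailed errors the least-squares estimator (the sample mean,
# in the location case) is inefficient compared with a robust alternative.
rng = np.random.default_rng(0)
n, reps = 50, 5000
samples = rng.standard_t(df=3, size=(reps, n))   # heavy-tailed errors

var_mean = np.var(samples.mean(axis=1))          # variance of LS estimator
var_median = np.var(np.median(samples, axis=1))  # variance of robust estimator
print(f"var(mean)   = {var_mean:.4f}")
print(f"var(median) = {var_median:.4f}")         # noticeably smaller
```

The median's sampling variance comes out well below the mean's, mirroring the paper's point that least squares loses efficiency once normality fails.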

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 388
984 Effects of Forest Therapy on Depression among Healthy Adults 

Authors: Insook Lee, Heeseung Choi, Kyung-Sook Bang, Sungjae Kim, Minkyung Song, Buhyun Lee

Abstract:

Background: A clearer and more comprehensive understanding of the effects of forest therapy on depression is needed for further refinement of forest therapy programs. The purpose of this study was to review the literature on forest therapy programs designed to decrease the level of depression among adults and to evaluate current forest therapy programs. Methods: This literature review was conducted using various databases, including PubMed, EMBASE, CINAHL, PsycArticle, KISS, RISS, and DBpia, to identify relevant studies published up to January 2016. Two authors independently screened the full-text articles using the following criteria: 1) intervention studies assessing the effects of forest therapy on depression among healthy adults aged 18 and over; 2) including at least one control group or condition; 3) being peer-reviewed; and 4) being published either in English or Korean. The Scottish Intercollegiate Guideline Network (SIGN) measurement tool was used to assess the risk of bias in each trial. Results: After screening the current literature, a total of 14 articles (English: 6, Korean: 8) were included in the present review. None of the studies used a randomized controlled trial (RCT) design, and the sample sizes ranged from 11 to 300. Walking in the forest and experiencing the forest using the five senses were the key components of forest therapy included in all studies. The majority of studies used a one-time intervention that usually lasted a few hours or half a day. The most widely used measure for depression was the Profile of Mood States (POMS). Most studies used self-reported, paper-and-pencil tests, and only 5 studies used both paper-and-pencil tests and physiological measures.
Regarding the quality assessment based on the SIGN criteria, only 3 articles were rated ‘acceptable’ and the remaining 11 articles were rated ‘low quality.’ Regardless of the diversity in the format and contents of forest therapies, most studies showed a significant effect of forest therapy in reducing depression. Discussion: This systematic review showed that forest therapy is one of the emerging and effective intervention approaches for decreasing the level of depression among adults. The limitations of the current programs identified from the review were as follows: 1) small sample sizes; 2) a lack of objective and comprehensive measures of depression; and 3) inadequate information about the research process. Future studies assessing the long-term effect of forest therapy on depression using rigorous study designs are needed.

Keywords: forest therapy, systematic review, depression, adult

Procedia PDF Downloads 279
983 Luteolin Exhibits Anti-Diabetic Effects by Increasing Oxidative Capacity and Regulating Anti-Oxidant Metabolism

Authors: Eun-Young Kwon, Myung-Sook Choi, Su-Jung Cho, Ji-Young Choi, So Young Kim, Youngji Han

Abstract:

Overweight and obesity have been linked to a low-grade chronic inflammatory response and an increased risk of developing metabolic syndrome, including insulin resistance, type 2 diabetes mellitus, and certain types of cancer. Luteolin is a dietary flavonoid with anti-inflammatory, anti-oxidant, anti-cancer, and anti-diabetic properties. However, little is known about the detailed mechanism underlying the effect of luteolin on inflammation-related obesity and its complications. The aim of the present study was to reveal the anti-diabetic effect of luteolin in diet-induced obese mice using transcriptomics tools. Thirty-nine male C57BL/6J mice (4 weeks old) were randomly divided into 3 groups and were fed a normal diet, a high-fat diet (HFD, 20% fat), or HFD+0.005% (w/w) luteolin for 16 weeks. Luteolin improved insulin resistance, as measured by HOMA-IR and glucose tolerance, along with a preserving action on pancreatic β-cells, compared to the HFD group. Luteolin significantly decreased the plasma levels of leptin and ghrelin, which play a pivotal role in energy balance, and of the macrophage low-grade inflammation marker sCD163 (soluble CD antigen 163). Activities of hepatic anti-oxidant enzymes (catalase and glutathione peroxidase) were increased, while the levels of plasma transaminases (GOT and GPT) and oxidative damage markers (hepatic mitochondrial H2O2 and TBARS) were markedly decreased by luteolin supplementation. In addition, luteolin increased oxidative capacity and fatty acid utilization, as indicated by the changes in the enzyme activities of citrate synthase, cytochrome C oxidase, and β-hydroxyacyl-CoA dehydrogenase and in UCP3 gene expression compared to the high-fat diet. Moreover, our microarray results in muscle revealed that gene expressions associated with the TCA cycle that were down-regulated by the HFD were reversed to normal levels by luteolin treatment.
Taken together, our results indicate that luteolin is one of the bioactive components for improving insulin resistance, by increasing oxidative capacity, modulating anti-oxidant metabolism, and suppressing inflammatory signaling cascades in diet-induced obese mice. These results suggest possible therapeutic targets for the prevention and treatment of diet-induced obesity and its complications.

Keywords: anti-oxidant metabolism, diabetes, luteolin, oxidative capacity

Procedia PDF Downloads 324
982 The Effect of Acute Muscular Exercise and Training Status on Haematological Indices in Adult Males

Authors: Ibrahim Musa, Mohammed Abdul-Aziz Mabrouk, Yusuf Tanko

Abstract:

Introduction: Long-term physical training affects the performance of athletes, especially females. Soccer, a team sport played on an outdoor field, requires an adequate oxygen transport system for maximal aerobic power during exercise in order to complete 90 minutes of competitive play. Suboptimal haematological status has often been recorded in athletes with intensive physical activity. It may be due to iron depletion caused by hemolysis, or to haemodilution resulting from plasma volume expansion. There is a lack of data regarding the dynamics of red blood cell variables in male football players. We hypothesized that a long competitive season involving frequent matches and intense training could influence red blood cell variables, as a consequence of repeated physical loads, when compared with sedentary subjects. Methods: This cross-sectional study was carried out on 40 adult males (20 athletes and 20 non-athletes) between 18 and 25 years of age. The 20 apparently healthy male non-athletes were taken as the sedentary group, and the 20 male footballers comprised the study group. The university institutional review board (ABUTH/HREC/TRG/36) gave approval for all procedures in accordance with the Declaration of Helsinki. Red blood cell (RBC) concentration, packed cell volume (PCV), and plasma volume were measured in the fasting state and immediately after exercise. Statistical analysis was done using SPSS/Win 20.0 for comparisons within and between the groups, using Student's paired and unpaired t-tests, respectively. Results: Our findings show that, immediately after termination of exercise, the mean RBC counts and PCV significantly (p<0.005) decreased, with a significant increase (p<0.005) in plasma volume, when compared with pre-exercise values in both groups. In addition, the post-exercise RBC was significantly higher in the untrained (261.10±8.5) than in the trained (255.20±4.5) group.
However, there were no significant differences in the post-exercise hematocrit and plasma volume parameters between the sedentary group and the footballers. Moreover, besides the changes in pre-exercise values among the sedentary subjects and the football players, the resting red blood cell counts and plasma volume (PV %) were significantly (p < 0.05) higher in the sedentary group (306.30±10.05 x 10^4/mm3; 58.40±0.54%) than in the football players (293.70±4.65 x 10^4/mm3; 55.60±1.18%). On the other hand, the sedentary group exhibited a significant (p < 0.05) decrease in PCV (41.60±0.54%) when compared with the football players (44.40±1.18%). Conclusions: It is therefore proposed that the acute football-exercise-induced reduction in RBC and PCV is entirely due to plasma volume expansion, and not to red blood cell hemolysis. In addition, training status influenced the haematological indices of male football players differently from those of the sedentary group at rest, due to an adaptive response; to our knowledge, this finding is novel.
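The two tests used above (a paired t-test for pre- versus post-exercise values within a group, and an unpaired t-test between groups) reduce to a few lines. The numbers below are made-up illustrative values, not the study's measurements:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t: mean difference over its standard error."""
    d = [a - b for a, b in zip(pre, post)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

def unpaired_t(x, y):
    """Student's unpaired t with pooled variance."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1/nx + 1/ny))

pre  = [306, 298, 310, 301, 295, 305]   # hypothetical resting RBC values
post = [255, 250, 262, 253, 249, 257]   # hypothetical post-exercise values
print(round(paired_t(pre, post), 2))    # large t: a clear within-group drop
```

The paired version gains power by differencing out between-subject variation, which is why it is the right choice for the pre/post comparison within each group.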

Keywords: haematological indices, performance status, sedentary, male football players

Procedia PDF Downloads 242
981 Outcomes of Pregnancy in Women with TPO Positive Status after Appropriate Dose Adjustments of Thyroxin: A Prospective Cohort Study

Authors: Revathi S. Rajan, Pratibha Malik, Nupur Garg, Smitha Avula, Kamini A. Rao

Abstract:

This study aimed to analyse pregnancy outcomes in patients with TPO positivity after appropriate L-Thyroxin supplementation with close surveillance. All pregnant women attending the antenatal clinic at Milann-The Fertility Center, Bangalore, India, from Aug 2013 to Oct 2014 whose booking TSH was more than 2.5 mIU/L were included, along with pregnant women with prior hypothyroidism who were TPO positive. Those with TPO-positive status were vigorously managed with appropriate thyroxin supplementation, and the doses were readjusted every 3 to 4 weeks until delivery. Women with recurrent pregnancy loss were also tested for TPO positivity and, if positive, were monitored serially with TSH and fT4 levels every 3 to 4 weeks and appropriately supplemented with thyroxin when the levels fluctuated. Testing was done after informed consent in all these women. The statistical software packages SAS 9.2, SPSS 15.0, Stata 10.1, MedCalc 9.0.1, Systat 12.0, and R ver. 2.11.1 were used for the analysis of the data. 460 pregnant women were screened for thyroid dysfunction at booking, of whom 52% were hypothyroid. The majority (31.08%) were subclinically hypothyroid and the remainder were overt. 25% of the total number of patients screened were TPO positive. The pregnancy complications observed in the TPO-positive women were gestational glucose intolerance [60%], threatened abortion [21%], midtrimester abortion [4.3%], premature rupture of membranes [4.3%], cervical funneling [4.3%], and fetal growth restriction [3.5%]. 95.6% of the patients who followed up till the end delivered beyond 30 weeks. 42.6% of these patients had a previous history of recurrent abortions or adverse obstetric outcomes, and 21.7% of the delivered babies required NICU admission.
Obstetric outcomes in our study in terms of midtrimester abortions, placental abruption, and preterm delivery improved after close monitoring of the thyroid hormone levels [TSH and fT4] every 3 to 4 weeks, with appropriate dose adjustment throughout pregnancy. The euthyroid women with TPO-positive status enrolled incidentally in the study were those with recurrent abortions/infertility, and they required thyroxin supplements due to elevated thyroid hormone (TSH, fT4) levels during the course of their pregnancy. Significant associations were found with age > 30 years and hyperhomocysteinemia [p=0.017], recurrent pregnancy loss or previous adverse obstetric outcomes [p=0.067], and APLA [p=0.029]. TPO antibody levels > 600 IU/ml were significantly associated with the development of gestational hypertension [p=0.041] and fetal growth restriction [p=0.082]. Euthyroid women with TPO positivity were also screened periodically to counter fluctuations of the thyroid hormone levels, with appropriate thyroxin supplementation. Thus, early identification, aggressive management of thyroid dysfunction, and stratification of these patients based on their TPO status, with appropriate thyroxin supplementation beginning in the first trimester, will aid risk modulation and also help avert complications.
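Associations of this kind (e.g. TPO antibody level above versus below 600 IU/ml against the presence of gestational hypertension) are what a 2x2 chi-square test quantifies. As a sketch, with hypothetical cell counts that are not taken from the study:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]:
    rows = exposure (e.g. TPO > 600 IU/ml yes/no),
    cols = outcome (e.g. gestational hypertension yes/no)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

stat = chi_square_2x2(8, 12, 5, 45)   # hypothetical counts
print(round(stat, 2))                  # ≈ 8.50; exceeds 3.84 (chi-square, 1 df, 5%)
```

A statistic above the 5% critical value of 3.84 would correspond to a p-value below 0.05, the kind of threshold behind the bracketed p-values reported above.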

Keywords: TPO antibody, subclinical hypothyroidism, anti nuclear antibody, thyroxin

Procedia PDF Downloads 308
980 Qualitative Profiling in Practice: The Italian Public Employment Services Experience

Authors: L. Agneni, F. Carta, C. Micheletta, V. Tersigni

Abstract:

The development of a qualitative method to profile jobseekers is needed to improve the quality of the Public Employment Services (PES) in Italy. This is why the National Agency for Active Labour Market Policies (ANPAL) decided to introduce a Qualitative Profiling Service in the context of the activities carried out by local employment offices' operators. The qualitative profiling service provides information and data regarding the jobseeker's personal transition status through a semi-structured questionnaire administered to PES clients during the guidance interview. The questionnaire responses allow PES staff to identify, for each client, proper activities and policy measures to support jobseekers in their reintegration into the labour market. The data and information gathered by the qualitative profiling tool are the following: the frequency, modalities, and motivations of clients applying to local employment offices; clients' expectations and skills; the difficulties they have faced during previous working experiences; and the strategies, actions undertaken, and channels activated for the job search. These data are used to assess jobseekers' personal and career characteristics and to measure their employability level (qualitative profiling index), in order to develop and deliver tailor-made action programmes for each client. This paper illustrates the use of the above-mentioned qualitative profiling service across the national territory and provides an overview of the main findings of the survey concerning the difficulties that unemployed people face in finding a job and their perception of different aspects related to the transition into the labour market. The survey involved over 10,000 jobseekers registered with the PES. Most of them are beneficiaries of the “citizens' income”, a specific active labour policy and social inclusion measure.
Furthermore, the data analysis allows classifying jobseekers into specific groups of clients with similar features and behaviours, on the basis of socio-demographic variables, customers' expectations, needs, and the skills required for the profession in which they seek employment. Finally, the survey collects PES staff opinions and comments concerning clients' difficulties in finding a new job, as well as their strengths. This is a starting point for PES operators to define adequate strategies to facilitate jobseekers' access or reintegration into the labour market.

Keywords: labour market transition, public employment services, qualitative profiling, vocational guidance

Procedia PDF Downloads 122
979 Green Ports: Innovation Adopters or Innovation Developers

Authors: Marco Ferretti, Marcello Risitano, Maria Cristina Pietronudo, Lina Ozturk

Abstract:

A green port is the result of a sustainable long-term strategy adopted by an entire port infrastructure, and therefore by the set of actors involved in port activities. The strategy aims to develop sustainable port infrastructure focused on the reduction of negative environmental impacts without jeopardising economic growth. Green technology represents the core tool for implementing sustainable solutions; however, it is not a magic bullet. Ports have always been integrated into the local territory, affecting the environment in which they operate; therefore, the sustainable strategy should fit the entire local system. Adopting a sustainable strategy thus means knowing how to involve and engage a wide stakeholder network (industries, production, markets, citizens, and public authorities). The existing research on the topic has not well integrated this perspective with that of sustainability. Research on green ports has mixed sustainability aspects with those of the maritime industry, neglecting the dynamics that lead to the development of the green port phenomenon. We propose an analysis of green ports adopting the lens of ecosystem studies in the field of management. The ecosystem approach provides a way to model the relations that enable green solutions and green practices in a port ecosystem. However, due to the local dimension of a port and the port trend towards innovation, i.e., sustainable innovation, we draw on a specific concept of ecosystem, that of local innovation systems. More precisely, we explore whether a green port is a local innovation system engaged in developing sustainable innovation with a large impact on the territory, or merely an innovation adopter. To address this issue, we adopt a comparative case study, selecting two innovative ports in Europe: Rotterdam and Genoa. The case study is a research method focused on understanding the dynamics of a specific situation and can be used to provide a description of real circumstances.
Preliminary results show two different approaches to supporting sustainable innovation: one represented by Rotterdam, a pioneer in competitiveness and sustainability, and the other represented by Genoa, an example of a technology adopter. The paper intends to provide a better understanding of how sustainable innovations are developed and of the manner in which a network of port and local stakeholders supports this process. Furthermore, it proposes a taxonomy of green ports as developers and adopters of sustainable innovation, also suggesting best practices for modelling the relationships that enable the port ecosystem to apply a sustainable strategy.

Keywords: green port, innovation, sustainability, local innovation systems

Procedia PDF Downloads 102
978 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, despite the several research projects devoted to this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore the resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. The speed at which productivity decreases for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.
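The mechanism the model captures (a change injects rework, the schedule reacts with overtime, and sustained overtime erodes labour productivity) can be sketched in a few lines; the parameters and the linear fatigue rule below are invented for illustration and are not the paper's calibrated model:

```python
def simulate(n_activities, change_at, rework=10.0, fatigue=0.02):
    """Serial project with one unit of work per activity. A change at
    period `change_at` adds `rework` units; the crew then works overtime
    (1.5x hours) but loses `fatigue` productivity per overtime period,
    floored at 0.5. Returns (duration, final productivity)."""
    remaining = float(n_activities)
    productivity = 1.0
    t = 0
    while remaining > 0:
        t += 1
        if t == change_at:
            remaining += rework          # the change triggers rework
        overtime = t >= change_at        # reactive schedule: overtime after it
        if overtime:
            productivity = max(0.5, productivity - fatigue)
        remaining -= productivity * (1.5 if overtime else 1.0)
    return t, productivity

for jobs in (30, 120):
    duration, final_p = simulate(jobs, change_at=5)
    print(jobs, duration, round(final_p, 2))
```

Even this toy version reproduces the qualitative effect noted above for the timing of the change: an earlier change means more periods spent on degraded overtime, so final productivity is lower.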

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 227
977 A Comparative Study of Cardio Respiratory Efficiency between Aquatic and Track and Field Performers

Authors: Sumanta Daw, Gopal Chandra Saha

Abstract:

The present study was conducted to explore the basic pulmonary functions, which may generally vary according to the bio-physical characteristics of sports performers, including age, height, body weight, and environment. Regular and specific training exercises also change the characteristics of an athlete's prowess and produce a positive effect on physiological functioning, mostly upon cardio-pulmonary efficiency, thereby improving the body's mechanisms. The objective of the present study was to compare the differences in cardio-respiratory functions between aquatic and track and field performers. As cardio-respiratory functions are influenced by pulse rate and blood pressure (systolic and diastolic), both of these factors were also taken into consideration. The components selected under cardio-respiratory functions for the present study were i) the FEV1/FVC ratio (forced expiratory volume divided by forced vital capacity; the number represents the percentage of lung capacity that can be exhaled in one second), ii) FEV1 (the amount of air that can be forced out of the lungs in one second), and iii) FVC (forced vital capacity, the greatest total amount of air that can be forcefully breathed out after breathing in as deeply as possible). All three selected components of cardio-respiratory efficiency were measured by spirometry. Pulse rate was determined manually; the radial artery, located on the thumb side of the wrist, was used to assess the pulse rate. Blood pressure was assessed with a sphygmomanometer. All data were taken in the resting condition. 36 subjects were selected for the present study, of whom 18 were water polo players and the rest were sprinters. The age of the subjects was between 18 and 23 years. The obtained data, in the form of digital scores, were treated statistically to obtain results and draw conclusions.
The mean and standard deviation (SD) were used as descriptive statistics, and the significance of the difference between the two subject groups was assessed with the ‘t’-test. It was found from the study that all three components, i.e., the FEV1/FVC ratio (p-value 0.0148 < 0.05), FEV1 (p-value 0.0010 < 0.01), and FVC (p-value 0.0067 < 0.01), differed significantly, with water polo players proving to be better in terms of cardio-respiratory functions than sprinters. Thus, the study clearly suggests that the exercise training, as well as the practice medium associated with water polo, has played an important role in producing better cardio-respiratory efficiency than that of track and field athletes. The outcome of the present study revealed that lung function may not be impacted as much by land-based activities as by water-based activities.

Keywords: cardio-respiratory efficiency, spirometry, water polo players, sprinters

Procedia PDF Downloads 117
976 Prevalence of Fast-Food Consumption on Overweight or Obesity on Employees (Age Between 25-45 Years) in Private Sector; A Cross-Sectional Study in Colombo, Sri Lanka

Authors: Arosha Rashmi De Silva, Ananda Chandrasekara

Abstract:

This study seeks to comprehensively examine the influence of fast-food consumption and physical activity levels on the body weight of young employees within the private sector of Sri Lanka. The escalating popularity of fast food has raised concerns about its nutritional content and associated health ramifications. To investigate this phenomenon, a cohort of 100 individuals aged between 25 and 45, employed in Sri Lanka's private sector, participated in this research. These participants provided socio-demographic data through a standardized questionnaire, enabling the characterization of their backgrounds. Additionally, participants disclosed their frequency of fast-food consumption and engagement in physical activities, utilizing validated assessment tools. The collected data was meticulously compiled into an Excel spreadsheet and subjected to rigorous statistical analysis. Descriptive statistics, such as percentages and proportions, were employed to delineate the body weight status of the participants. Employing chi-square tests, our study identified significant associations between fast-food consumption, levels of physical activity, and body weight categories. Furthermore, through binary logistic regression analysis, potential risk factors contributing to overweight and obesity within the young employee cohort were elucidated. Our findings revealed a disconcerting trend, with 6% of participants classified as underweight, 32% within the normal weight range, and a substantial 62% categorized as overweight or obese. These outcomes underscore the alarming prevalence of overweight and obesity among young private-sector employees, particularly within the bustling urban landscape of Colombo, Sri Lanka. The data strongly imply a robust correlation between fast-food consumption, sedentary behaviors, and higher body weight categories, reflective of the evolving lifestyle patterns associated with the nation's economic growth. 
This study emphasizes the urgent need for effective interventions to counter the detrimental effects of fast-food consumption. The implementation of awareness campaigns elucidating the adverse health consequences of fast food, coupled with comprehensive nutritional education, can empower individuals to make informed dietary choices. Workplace interventions, including the provision of healthier meal alternatives and the facilitation of physical activity opportunities, are essential in fostering a healthier workforce and mitigating the escalating burden of overweight and obesity in Sri Lanka.
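As a rough illustration of the chi-square test of association used in this study, the sketch below computes the test statistic for a hypothetical weight-status by fast-food-frequency table; the counts are invented, not the study's data:

```python
# Hypothetical 2x3 contingency table: rows = fast-food frequency (low, high),
# columns = weight status (underweight, normal, overweight/obese). NOT study data.
observed = [
    [4, 20, 16],   # low fast-food frequency
    [2, 12, 46],   # high fast-food frequency
]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand = sum(row_totals)

# Pearson chi-square: sum of (observed - expected)^2 / expected over all cells
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-square = {chi2:.2f} on {df} df")  # critical value at alpha=0.05, df=2: 5.99
```

If the statistic exceeds the critical value (5.99 for 2 degrees of freedom at the 5% level), the association between fast-food frequency and weight category is declared significant.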

Keywords: fast food consumption, obese, overweight, physical activity level

Procedia PDF Downloads 28
975 Eucalyptus camaldulensis Leaves Attacked by the Gall Wasp Leptocybe invasa: A Phyto-Volatile Constituents Study

Authors: Maged El-Sayed Mohamed

Abstract:

Eucalyptus camaldulensis is one of the most well-known species of the genus Eucalyptus in the Middle East; its importance relies on the high production of its unique volatile constituents, which exhibit many medicinal and pharmacological activities. The gall-forming wasp Leptocybe invasa has recently emerged as the main pest attacking E. camaldulensis, causing severe injury. The wasp lays its eggs in the petiole and midrib of leaves and in stems of young shoots of E. camaldulensis, which leads to gall formation. Gall formation by L. invasa damages the growing shoots and leaves of Eucalyptus, resulting in leaf abscission and drying. AIM: This study is an attempt to investigate the effect of gall wasp (Leptocybe invasa) attack on the volatile constituents of E. camaldulensis. This could help in the control of this wasp through stimulating plant defenses or the production of new allelochemicals or insecticides. The study of the volatile constituents of Eucalyptus before and after attack by the wasp can support the re-use and recycling of infected Eucalyptus trees for new pharmacological and medicinal activities. Methodology: Fresh gall wasp-attacked and healthy leaves (100 g each) were cut and immediately subjected to hydrodistillation in a Clevenger-type apparatus for 3 hours. The volatile fractions isolated were analyzed by gas chromatography/mass spectrometry (GC/MS). Kovats retention indices (RI) were calculated with respect to a set of co-injected standard hydrocarbons (C10-C28). Compounds were identified by comparing their spectral data and retention indices with the Wiley Registry of Mass Spectral Data, 10th edition (April 2013), the NIST 11 Mass Spectral Library (NIST11/2011/EPA/NIH) and literature data. Results: Fifty-nine components, representing 89.13% and 88.60% of the total volatile fraction content respectively, were quantitatively analyzed. 
Twenty-six major compounds, at an average concentration greater than 0.1 ± 0.02%, were used for the statistical comparison. Of these major components, twenty-one were found in both the attacked and healthy leaf fractions in different concentrations, while five components, the monoterpene p-mentha-2,4(8)-diene and the sesquiterpenes δ-elemene, β-elemene, E-caryophyllene and bicyclogermacrene, were unique to the attacked-leaf fraction. CONCLUSION: Newly produced components, or those commonly found in the volatile fraction whose concentrations changed, could form part of the plant's defense mechanisms or might be elements of its allelopathic and communication mechanisms. Identification of the components of gall wasp-damaged leaves can help in their recycling for different physiological, pharmacological and medicinal uses.
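The retention-index calculation against the co-injected alkane ladder can be sketched as follows. The retention times are illustrative, not measured values from this study, and the linear (van den Dool and Kratz) form for temperature-programmed GC is assumed:

```python
# Carbon number -> retention time (min) for part of the C10-C28 alkane ladder;
# values are illustrative only
alkanes = {10: 5.2, 11: 7.1, 12: 9.3, 13: 11.6}

def retention_index(t_x, ladder):
    """Linear (van den Dool & Kratz) retention index for an analyte eluting
    at time t_x, interpolated between the bracketing n-alkanes."""
    carbons = sorted(ladder)
    for n, n1 in zip(carbons, carbons[1:]):
        if ladder[n] <= t_x <= ladder[n1]:
            return 100 * n + 100 * (t_x - ladder[n]) / (ladder[n1] - ladder[n])
    raise ValueError("retention time outside the alkane ladder")

# An analyte at 8.2 min sits halfway between C11 (7.1) and C12 (9.3)
print(f"RI = {retention_index(8.2, alkanes):.0f}")
```

The computed index is then matched against library values (Wiley, NIST) alongside the mass spectrum to confirm each identification.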

Keywords: Eucalyptus camaldulensis, eucalyptus recycling, gall wasp, Leptocybe invasa, plant defense mechanisms, terpene fraction

Procedia PDF Downloads 342
974 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. 
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
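One common way to extract planar elements such as walls or floors from a point cloud is RANSAC plane fitting. The toy sketch below, with synthetic points and an arbitrary distance threshold, illustrates the idea; the paper's actual algorithms are not described in detail, so this is only a generic stand-in:

```python
import random

random.seed(1)
# Synthetic cloud: 200 points near the plane z = 0 (a "floor") plus 50 outliers
cloud = [(random.uniform(0, 5), random.uniform(0, 5), random.gauss(0, 0.01))
         for _ in range(200)]
cloud += [(random.uniform(0, 5), random.uniform(0, 5), random.uniform(0.5, 3))
          for _ in range(50)]

def plane_from_points(p, q, r):
    """Unit normal n and offset d of the plane n.x = d through three points."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None          # degenerate (collinear) sample
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

# RANSAC loop: sample 3 points, fit a plane, count points within the threshold
best_inliers = 0
for _ in range(100):
    fit = plane_from_points(*random.sample(cloud, 3))
    if fit is None:
        continue
    n, d = fit
    inliers = sum(abs(sum(n[i]*pt[i] for i in range(3)) - d) < 0.05 for pt in cloud)
    best_inliers = max(best_inliers, inliers)

print(f"largest planar cluster: {best_inliers} of {len(cloud)} points")
```

In a real pipeline the winning plane's inliers would be removed, the search repeated for the next element, and each detected plane handed to the parametric BIM reconstruction step.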

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 36
973 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when applied to multi-factor and multi-response situations. A design of experiments (DOE) technique was used to select the optimum machining conditions for machining AISI 4140 by EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of optimized settings of key machining factors, such as pulse-on time, gap voltage, flushing pressure, input current and duty cycle, on material removal and surface roughness has been carried out using a central composite design. The objective is to maximize the material removal rate (MRR). Central composite design data are used to develop second-order polynomial models with interaction terms. Insignificant coefficients are eliminated from these models using Student's t-test, and the F-test is applied for goodness of fit. CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR. 
The responses are further treated through an objective function to establish the same set of key machining factors satisfying the optimization problem of the EDM process. The results demonstrate the better performance of CCD-based RSM for optimizing the EDM process.
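The run layout of a central composite rotatable design can be generated mechanically. The sketch below assumes the standard construction (two-level factorial points, axial points at distance alpha = (2^k)^(1/4) for rotatability, plus centre runs) for four coded factors; the number of centre runs is an illustrative choice:

```python
from itertools import product

def ccd_points(k, center_runs=4):
    """Central composite rotatable design in coded units for k factors."""
    alpha = (2 ** k) ** 0.25                      # axial distance for rotatability
    factorial = [tuple(float(v) for v in pt)
                 for pt in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    center = [tuple([0.0] * k)] * center_runs
    return factorial + axial + center

# Four coded factors, e.g. peak current, pulse-on time, interval time, voltage
design = ccd_points(4)
print(len(design))   # 16 factorial + 8 axial + 4 center = 28 runs
```

Each of the 28 runs is then machined and the measured MRR and roughness are regressed onto the second-order polynomial model.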

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 327
972 Assessment of Drinking Water Contamination from the Water Source to the Consumer in Palapye Region, Botswana

Authors: Tshegofatso Galekgathege

Abstract:

Poor water quality is of great concern to human health as it can cause disease outbreaks. A standard practice today in developed countries is that people should be provided with safe, reliable drinking water, as safe drinking water is recognized as a basic human right and a cost-effective measure for reducing disease. Over 1.1 billion people worldwide lack access to a safe water supply and, as a result, the majority are forced to use polluted surface water or groundwater. It is widely accepted that our water supply systems are susceptible to intentional or accidental contamination. Water quality degradation may occur anywhere along the path that water takes from the source to the consumer. Chlorine is believed to be an effective tool for disinfecting water, but its concentration may decrease with time due to consumption by chemical reactions. We are therefore at risk of infection by waterborne diseases if the chlorine level in water falls below the required range of 0.2-1 mg/liter and contaminants enter the water distribution system. The lack of adequate sanitation is also believed to contribute to the contamination of water globally. This study, therefore, assesses drinking water contamination from the source to the consumer by identifying the points vulnerable to contamination in the study area. To identify these points, water was sampled monthly from boreholes, the water treatment plant, the water distribution system (WDS), service reservoirs and consumer taps in all twenty (20) villages of the Palapye region. Sampled water was then taken to the laboratory for testing and analysis of microbiological and chemical parameters. Water quality analyses were then compared with the Botswana drinking water quality standard (BOS 32:2009) to check compliance. 
Major sources of water contamination identified during site visits were livestock, which were found drinking stagnant water from leaking pipes in 90 percent of the villages. Soil structure around these areas was negatively affected by livestock movement, as was the vegetation. In conclusion, the microbiological parameters of water in the study area do not comply with drinking water standards; some microbiological parameters indicated that livestock affect not only land degradation but also water quality. Chlorine has been applied to the water for some years, but it is not effective enough; thus, preventative measures have to be developed to stop contaminants from reaching the water. Prevention is better than cure.
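Chlorine residual loss in a network is often approximated by first-order decay, C(t) = C0·e^(-kt). The sketch below uses illustrative values for the initial residual and the decay coefficient, not measurements from the Palapye system, to show how the time until the residual drops below the 0.2 mg/liter lower limit can be estimated:

```python
import math

# Illustrative values only, NOT measurements from the study area
C0 = 1.0      # initial chlorine residual at the treatment plant, mg/L
k = 0.05      # first-order bulk decay coefficient, 1/h

def residual(t_hours):
    """Chlorine residual after t hours under first-order decay."""
    return C0 * math.exp(-k * t_hours)

# Solve C(t) = 0.2 for t: the time at which the residual hits the lower limit
t_limit = math.log(C0 / 0.2) / k

print(f"residual after 24 h: {residual(24):.2f} mg/L")
print(f"drops below 0.2 mg/L after {t_limit:.1f} h")
```

Comparing this travel-time estimate with actual residence times in the distribution system indicates where re-chlorination points would be needed.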

Keywords: land degradation, leaking systems, livestock, water contamination

Procedia PDF Downloads 339
971 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows

Authors: C. D. Ellis, H. Xia, X. Chen

Abstract:

Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. The hybrid LES-RANS approach has been applied to a blunt leading-edge flat plate, utilising a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented with two turbulence models: the one-equation Spalart-Allmaras model and Menter's two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid-scale LES modelling. Hybridisation between the two methods is achieved by blending based on the nearest wall distance. Validation of the flow was obtained by assessing the mean velocity profiles in comparison to similar studies. The vortex structures of the flow were identified by utilising the λ2 criterion to locate vortex cores. The qualitative structure of the flow compared well with experiments at a similar Reynolds number. This identified the 2D roll-up of the shear layer, breaking down via the Kelvin-Helmholtz instability. Through this instability the flow progressed into hairpin-like structures, elongating as they advanced downstream. Proper orthogonal decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared to the surface temperature field. 
Both POD fields identified that the most energetic fluctuations occurred in the separated and recirculating region of the flow. Later modes of the surface temperature field showed these fluctuations dominating the time-mean region of maximum heat transfer and flow reattachment. In addition to the current research, work will be conducted on tracking the movement of the vortex cores and the location and magnitude of temperature hot spots upon the plate. This information will support the POD and statistical analysis performed, to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
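Snapshot POD reduces to a singular value decomposition of the mean-subtracted snapshot matrix, with the modal energies given by the normalised squared singular values. The sketch below applies it to a synthetic field standing in for the surface-temperature data, not the paper's simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 40

# Synthetic snapshots: two coherent space-time "modes" plus weak noise
x = np.linspace(0, 2 * np.pi, n_points)
t = np.linspace(0, 1, n_snapshots)
snapshots = (np.outer(np.sin(x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(2 * x), np.sin(4 * np.pi * t))
             + 0.01 * rng.standard_normal((n_points, n_snapshots)))

# Subtract the time-mean field, then decompose the fluctuations
mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field

# Columns of U are the spatial POD modes; singular values rank their energy
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(f"first two modes capture {100 * energy[:2].sum():.1f}% of the energy")
```

A rapidly decaying energy spectrum, as reported for the surface temperature field, means a few modes suffice to describe the dominant fluctuations.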

Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics

Procedia PDF Downloads 217
970 Comparison of the Effect of Nano Calcium Carbonate and CaCO₃ on Egg Production, Egg Traits and Calcium Retention in Laying Japanese Quail

Authors: Farhad Ahmadi, Hammed Kimiaee

Abstract:

Context: This research focuses on the effect of different levels and sources of calcium on egg production, egg traits, and calcium retention in laying Japanese quail. The study aims to determine the impact of nano calcium carbonate (NCC) and calcium carbonate (CC) on these factors. Research Aim: The main objective of this research is to investigate the effect of different levels and sources of calcium on egg production, egg traits, and calcium retention in laying Japanese quail, and specifically to compare the effects of NCC and CC on these parameters. Methodology: The research was conducted using a total of 280 laying quail with an average age of 8 weeks. The quails were randomly distributed in a completely randomized design (CRD) with 7 treatments, 4 replications, and 10 quails in each pen. The study lasted 90 days. The experimental diets included a control group (T1) with a basal diet containing 3.17% CaCO₃, and other groups supplemented with different levels (0.5%, 0.1%, and 0.15%) of either calcium carbonate (CC) or nano calcium carbonate (NCC). The quails had free access to water and feed throughout the study period. Findings: The results showed that NCC at the levels of 0.1% and 0.15% (T6 and T7) improved eggshell thickness and shell breaking strength compared to the control group. Although not statistically significant, there was an increasing trend in quail egg production and calcium retention in the calcareous shell of the egg in birds that consumed the experimental diets containing different levels of NCC, compared to the control and other treatment groups. Theoretical Importance: This research contributes to our understanding of the effect of NCC and CC on egg production, egg traits, and calcium retention in laying Japanese quail. It highlights the potential benefits of using NCC as a calcium source in quail diets, specifically in improving the quantity and quality of eggs and calcium retention. 
Data Collection and Analysis Procedures: Quail egg production was recorded monthly for each treatment group. At the end of the study, a total of 40 eggs (10 eggs/replicate) from each treatment group were randomly selected for analysis. Parameters such as eggshell thickness, shell breaking strength, and calcium retention were measured. Statistical analysis was performed to compare the results between the different treatment groups. Questions Addressed: This research aimed to answer the following questions: What is the effect of different levels and sources of calcium on egg production, egg traits, and calcium retention in laying Japanese quail? How does nano calcium carbonate compare to calcium carbonate in terms of these parameters? Conclusion: In conclusion, this study suggests that NCC at the levels of 0.1% and 0.15% can improve the quantity and quality of eggs and calcium retention in laying Japanese quail. These findings highlight the potential benefits of using NCC as a calcium source in quail diets. Further research could be conducted to explore the mechanisms behind these improvements and to optimize the dosage of NCC for maximum effect.
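A comparison of treatment means in a completely randomized design is typically done with one-way ANOVA. The sketch below computes the F statistic for made-up shell-thickness values in three of the seven groups; none of these numbers are the study's measurements:

```python
import statistics as st

# Hypothetical shell-thickness values (mm), 4 replicate means per treatment;
# NOT the study's data
groups = {
    "control":   [0.19, 0.20, 0.18, 0.19],
    "NCC 0.1%":  [0.22, 0.23, 0.21, 0.22],
    "NCC 0.15%": [0.23, 0.24, 0.22, 0.23],
}

all_vals = [v for g in groups.values() for v in g]
grand_mean = st.mean(all_vals)
k, n = len(groups), len(all_vals)

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (st.mean(g) - grand_mean) ** 2 for g in groups.values())
ss_within = sum(sum((v - st.mean(g)) ** 2 for v in g) for g in groups.values())

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.1f} on ({k - 1}, {n - k}) df")
```

An F value above the critical value (about 4.26 for (2, 9) degrees of freedom at the 5% level) would be followed by pairwise mean comparisons against the control.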

Keywords: egg, calcium, nanoparticles, retention

Procedia PDF Downloads 50
969 Body, Experience, Sense, and Place: Past and Present Sensory Mappings of Istiklal Street in Istanbul

Authors: Asiye Nisa Kartal

Abstract:

An attempt to recognize the undiscovered bonds between the sensory experiences (intangible qualities) and the physical setting (tangible qualities) of Istiklal Street in Istanbul could be taken as the first inspiration point for this study. 'The dramatic physical changes' and 'their current impacts on sensory attributions' of Istiklal Street have directed this study to consider the role of changes in the physical layout on sensory dimensions, which have a subtle but important role in the examination of urban places. Public places have always been subject to transformation, and in recent years the changing socio-cultural structure, economic and political movements, law and city regulations, and innovative transportation and communication activities have resulted in a controversial modification of Istanbul. As the culture, entertainment, tourism, and shopping focus of Istanbul, Istiklal Street has witnessed different stages of change within the last years. In this process, because of the projects being implemented, many buildings such as cinemas, theatres, and bookstores have been restored, moved, converted, closed or demolished; these have been significant elements in terms of the qualitative value of this area. The multi-layered socio-cultural and architectural structure of Istiklal Street has thus been changing in a dramatic and controversial way. Importantly, while the physical setting of Istiklal Street has changed, the transformation has not been merely spatial, socio-cultural and economic; unavoidably, the sensory dimensions of Istiklal Street, which have great importance in terms of the intangible qualities of this area, have begun to lose their distinctive features. This has created the challenge of this research. 
As the main hypothesis, this study claims that the physical transformations have led to changes in the sensory character of Istiklal Street; therefore, the sensescape of Istiklal Street deserves to be recorded, decoded and promoted as expeditiously as possible, in order to observe the sensory reflections of the physical transformations in this area. With the help of 'sensewalking', an efficient research tool for generating knowledge on the sensory dimensions of an urban settlement, this study suggests a way of 'mapping' to understand how 'changes of physical setting' play a role in the 'sensory qualities' of Istiklal Street that have changed or been lost over time. Basically, this research focuses on the sensory mapping of Istiklal Street from the 1990s until today, in order to picture, interpret and criticize the 'sensory mapping of Istiklal Street in the present' and the 'sensory mapping of Istiklal Street in the past'. Through the sensory mapping of Istiklal Street, this study intends to increase awareness of the distinctive sensory qualities of places. It is worthwhile for further studies that consider the sensory dimensions of places, especially in the field of architecture.

Keywords: Istiklal street, sense, sensewalking, sensory mapping

Procedia PDF Downloads 156
968 Association of Brain Derived Neurotrophic Factor with Iron as well as Vitamin D, Folate and Cobalamin in Pediatric Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The impact of metabolic syndrome (MetS) on cognition and brain function is being investigated. Iron deficiency and deficiencies of vitamins B9 (folate) and B12 (cobalamin) are the best-known nutritional anemias. They are associated with cognitive disorders and learning difficulties. The antidepressant effects of vitamin D are known, and the deficiency state affects mental functions negatively. The aim of this study is to investigate possible correlations of MetS with serum brain-derived neurotrophic factor (BDNF), iron, folate, cobalamin and vitamin D in pediatric patients. 30 children whose age- and sex-dependent body mass index (BMI) percentiles varied between 15 and 85, together with 60 morbidly obese (MO) children above the 99th percentile, constituted the study population. Anthropometric measurements were taken and BMI values were calculated. Age- and sex-dependent BMI percentile values were obtained using the appropriate tables prepared by the World Health Organization (WHO). Obesity classification was performed according to WHO criteria, and those with MetS were evaluated according to MetS criteria. Serum BDNF was determined by enzyme-linked immunosorbent assay. Serum folate was analyzed by an immunoassay analyzer. Serum cobalamin concentrations were measured using electrochemiluminescence immunoassay. Vitamin D status was determined by the measurement of 25-hydroxycholecalciferol [25-hydroxy vitamin D3, 25(OH)D] using high performance liquid chromatography. Statistical evaluations were performed using SPSS for Windows, version 16. p values less than 0.05 were accepted as statistically significant. Although statistically insignificant, lower folate and cobalamin values were found in MO children compared to those observed for children with normal BMI. For iron and BDNF values, no alterations were detected among the groups. Significantly decreased vitamin D concentrations were noted in MO children with MetS in comparison with those in children with normal BMI (p ≤ 0.05). 
The positive correlation observed between iron and BDNF in the normal-BMI group was not found in the two MO groups. In the MetS group, the partial correlation among iron, BDNF, folate, cobalamin and vitamin D, controlling for waist circumference and BMI, was r = -0.501 (p ≤ 0.05). No such correlation was found in the MO and normal-BMI groups. In conclusion, vitamin D should also be considered during the assessment of pediatric MetS. Waist circumference and BMI should be evaluated collectively during the evaluation of MetS in children. Within this context, BDNF appears to be a key biochemical parameter during the examination of the degree of obesity in terms of mental functions, cognition and learning capacity. The association observed between iron and BDNF in children with normal BMI was not detected in the MO groups, possibly due to the development of inflammation and other obesity-related pathologies. It is suggested that this finding may contribute to the mental function impairments commonly observed among obese children.
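A partial correlation of the kind reported here can be computed by residualizing both variables on the covariates and correlating the residuals. All values below are synthetic stand-ins, not patient data, and the variable relationships are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60  # synthetic "subjects"

# Synthetic covariates and analytes; coefficients are arbitrary
waist = rng.normal(70, 10, n)
bmi = rng.normal(22, 4, n)
iron = 0.5 * bmi + rng.normal(0, 1, n)
bdnf = 0.5 * bmi + 0.8 * iron + rng.normal(0, 0.5, n)

def residualize(y, covariates):
    """Residuals of y after ordinary least-squares regression on covariates."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation of iron and BDNF, controlling for waist and BMI
r_iron = residualize(iron, [waist, bmi])
r_bdnf = residualize(bdnf, [waist, bmi])
partial_r = np.corrcoef(r_iron, r_bdnf)[0, 1]
print(f"partial r(iron, BDNF | waist, BMI) = {partial_r:.2f}")
```

Because the synthetic BDNF depends directly on iron beyond the shared BMI effect, the partial correlation stays positive even after the covariates are removed, mirroring the logic of the reported analysis.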

Keywords: brain-derived neurotrophic factor, iron, vitamin B9, vitamin B12, vitamin D

Procedia PDF Downloads 100
967 The Development of Cultural Routes: The Case of Greece

Authors: Elissavet Kosta

Abstract:

Introduction: In this research, we propose the methodology required for the planning of cultural routes, in order to prepare substantiated proposals for the development and planning of cultural routes in Greece in the near future. Our research started in 2016. Methodology: A combination of primary and secondary research is used as the project methodology. Furthermore, this study aims to follow a multidisciplinary approach, using dimensions of qualitative and quantitative data analysis models. Regarding the documentation of the theoretical part of the project, the method of secondary research is mainly used, in combination with bibliographic sources. However, data collection on the research topic is conducted exclusively through primary research (questionnaires and interviews). Cultural Routes: A cultural route is defined as a brand-name tourism product, that is, a product of cultural tourism shaped according to a specific connecting element. Given its potential, the cultural route is an important 'tool' for the management and development of cultural heritage. A constant development of cultural routes has been observed at the international level during the last decades, as it is widely accepted that cultural tourism plays an important role in the world tourism industry. Cultural Routes in Greece: For Greece in particular, we believe, actions have not yet been taken towards the systematic development of cultural routes. The cultural routes that include Greece and have been designed on a world scale, as well as the cultural routes designed on Greek ground up to this moment, are initiatives of the Council of Europe, the World Tourism Organization (UNWTO) and the 'Diazoma' association. 
Regarding the study of cultural routes in Greece as a multidimensional concept, the following concerns have arisen: Firstly, we are concerned about the general impact of cultural routes at local and national level and specifically in the economic sector. Moreover, we deal with the concerns regarding the natural environment and we delve into the educational aspect of cultural routes in Greece. In addition, the audience we aim at is both specific and broad and we put forward the institutional framework of the study. Finally, we conduct the development and planning of new cultural routes, having in mind the museums as both the starting and ending point of a route. Conclusion: The contribution of our work is twofold and lies firstly on the fact that we attempt to create cultural routes in Greece and secondly on the fact that an interdisciplinary approach is engaged towards realizing our study objective. In particular, our aim is to take advantage of all the ways in which the promotion of a cultural route can have a positive influence on the way of life of society. As a result, we intend to analyze how a cultural route can turn into a well-organized activity that can be used as social intervention to develop tourism, strengthen the economy and improve access to cultural goods in Greece during the economic crisis.

Keywords: cultural heritage, cultural routes, cultural tourism, Greece

Procedia PDF Downloads 222
966 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground-penetrating radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for imaging behind walls in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve, because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. The hyperbolic signature in the space-time GPR image is therefore usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the spatial location and the reflectivity of an underground object. The main challenge of GPR imaging is therefore to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts. 
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The improved BP algorithm was applied to both simulated and real GPR data, and the results showed that it achieves superior artifact suppression and produces images of high quality and resolution. To quantify the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
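The delay-and-sum back-projection scheme described above, with a per-pixel coherence factor standing in for the paper's cross-correlation weight, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the grid, velocity, and the choice of coherence factor as the weight are assumptions:

```python
import numpy as np

def back_projection(bscan, ant_x, xs, zs, dt, v, weighted=True):
    """Delay-and-sum BP of a monostatic B-scan onto an (x, z) grid.

    bscan: (n_positions, n_samples) array of received traces.
    As the weight factor, a coherence factor |sum s_i|^2 / (N sum s_i^2)
    is used here; the paper's cross-correlation weight plays a similar role.
    """
    n_pos, n_samp = bscan.shape
    image = np.zeros((len(xs), len(zs)))
    for ix, x in enumerate(xs):
        for iz, z in enumerate(zs):
            # Two-way travel time from each antenna position to the pixel.
            t = 2.0 * np.hypot(x - ant_x, z) / v
            idx = np.clip(np.round(t / dt).astype(int), 0, n_samp - 1)
            vals = bscan[np.arange(n_pos), idx]
            s = vals.sum()
            if weighted:
                denom = n_pos * (vals ** 2).sum()
                image[ix, iz] = (s ** 2) / denom * s if denom > 0 else 0.0
            else:
                image[ix, iz] = s
    return image

# Synthetic point target at (0.5 m, 0.5 m): one spike per trace on the hyperbola.
v, dt = 0.1, 0.1                         # m/ns and ns (assumed soil velocity)
ant_x = np.linspace(0.0, 1.0, 15)
bscan = np.zeros((15, 400))
t0 = 2.0 * np.hypot(0.5 - ant_x, 0.5) / v
bscan[np.arange(15), np.round(t0 / dt).astype(int)] = 1.0

xs = np.linspace(0.0, 1.0, 21)
zs = np.linspace(0.1, 1.0, 19)
img = back_projection(bscan, ant_x, xs, zs, dt, v)
```

With `weighted=False` the same function reproduces standard BP, so the two reconstructions can be compared directly: the coherence weight leaves the focused target untouched while suppressing the hyperbola tails.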

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 432
965 Soils Properties of Alfisols in the Nicoya Peninsula, Guanacaste, Costa Rica

Authors: Elena Listo, Miguel Marchamalo

Abstract:

This research studies the soil properties in the watershed of the Jabillo River in the Guanacaste province, Costa Rica. The soils are classified as Alfisols (T. Haplustalfs), in the flatter parts under grazing as Fluventic Haplustalfs, or, as a consequence of poor drainage, as F. Epiaqualfs. The objective of this project is to define the status of the soil, to use remote sensing as a tool for analyzing the evolution of land use, and to determine the water balance of the watershed in order to improve the efficiency of the water collecting systems. Soil samples from trial pits were analyzed from secondary forest, degraded pasture, mature teak plantation, and teak regrowth; the species (Tectona grandis L. f.) develops favorably in the area. To complete the study, infiltration was measured with an artificial rainfall simulator, and soil compaction was measured with a penetrometer, at points strategically selected within the different land uses. For the remote sensing component, nearly 40 data samples were collected per plot; the measured radiation was sunlight reflected from the upper and lower surfaces of leaves, bare soil, streams, roads and logs, and from the soil samples. Infiltration reached high levels, with the highest values in the secondary forest and the mature plantation, owing to their high proportion of organic matter, relatively low bulk density, and high hydraulic conductivity. Teak regrowth had a low infiltration rate because the compaction measurements showed partial compaction down to 50 cm. The secondary forest presented a compacted layer from 15 cm to 30 cm deep, and the degraded pasture, as a result of grazing, in the first 15 cm. The Alfisols of this area have a high content of iron oxides, which, together with their clay texture, causes higher reflectivity near the infrared region of the electromagnetic spectrum (around 700 nm).
In the teak plantation specifically, reflectivity reaches values of 90%, owing to its higher clay content relative to the other land uses. In conclusion, the protective function of secondary forests against erosion is reaffirmed, together with their high infiltration rates: in humid climates with permeable soils, runoff decreases less, but percolation increases. The remote sensing results indicate that, being clayey, these soils retain moisture well, which translates into low reflectivity despite their fine texture.

Keywords: alfisols, Costa Rica, infiltration, remote sensing

Procedia PDF Downloads 676
964 Effects of Lime and N100 on the Growth and Phytoextraction Capability of a Willow Variety (S. Viminalis × S. Schwerinii × S. Dasyclados) Grown in Contaminated Soils

Authors: Mir Md. Abdus Salam, Muhammad Mohsin, Pertti Pulkkinen, Paavo Pelkonen, Ari Pappinen

Abstract:

Soil and water pollution caused by extensive mining practices can adversely affect environmental components such as humans, animals, and plants. Despite a generally positive contribution to society, mining practices have become a serious threat to biological systems. As metals do not degrade completely, they require immobilization, toxicity reduction, or removal. A greenhouse experiment was conducted to evaluate the effects of lime and N100 (11-amino-1-hydroxyundecylidene) chelate amendment on the growth and phytoextraction potential of the willow variety Klara (S. viminalis × S. schwerinii × S. dasyclados) grown in soils heavily contaminated with copper (Cu). The plants were irrigated with tap or processed water (mine wastewater). The sequential extraction technique and inductively coupled plasma mass spectrometry (ICP-MS) were used to determine the extractable metals and to evaluate the fraction of metals in the soil potentially available for plant uptake. The results suggest that the combined effects of the contaminated soil and processed water inhibited growth, while the accumulation of Cu in the plant tissues increased compared to the control. When the soil was amended with lime and N100, growth parameters and resistance capacity were significantly higher than in the unamended soil treatments, especially for the contaminated soils. The combined lime- and N100-amended treatment produced higher biomass growth rates, resistance capacity, and phytoextraction efficiency than either the lime-amended or the N100-amended treatment alone. This study provides practical evidence of the efficient chelate-assisted phytoextraction capability of Klara and highlights its potential as a viable and inexpensive novel approach for in-situ remediation of Cu-contaminated soils and mine wastewaters.
Abandoned agricultural, industrial, and mining sites can also be utilized by a Salix afforestation program without conflicting with the production of food crops. Such a program may create opportunities for bioenergy production and economic development, but contamination levels should be examined before bioenergy products are used.

Keywords: copper, Klara, lime, N100, phytoextraction

Procedia PDF Downloads 135
963 Exploring the Relationship Between Helicobacter Pylori Infection and the Incidence of Bronchogenic Carcinoma

Authors: Jose R. Garcia, Lexi Frankel, Amalia Ardeljan, Sergio Medina, Ali Yasback, Omar Rashid

Abstract:

Background: Helicobacter pylori (H. pylori) is a gram-negative, spiral-shaped bacterium that affects nearly half of the population worldwide, with humans serving as the principal reservoir. Infection rates usually follow an inverse relationship with hygiene practices and are higher in developing countries than in developed countries. Incidence varies significantly by geographic area, race, ethnicity, age, and socioeconomic status. H. pylori is primarily associated with conditions of the gastrointestinal tract such as atrophic gastritis and duodenal peptic ulcers. Infection is also associated with an increased risk of carcinogenesis, as there is evidence that H. pylori infection may lead to gastric adenocarcinoma and mucosa-associated lymphoid tissue (MALT) lymphoma. It has been suggested that H. pylori infection may be considered a systemic condition, leading to various novel associations with several different neoplasms such as colorectal cancer, pancreatic cancer, and lung cancer, although further research is needed. Emerging evidence suggests that H. pylori infection may offer protective effects against Mycobacterium tuberculosis as a result of non-specific induction of interferon-γ (IFN-γ). Similar mechanisms of enhanced immunity may affect the development of bronchogenic carcinoma through the antiproliferative, pro-apoptotic, and cytostatic functions of IFN-γ. The purpose of this study was to evaluate the correlation between H. pylori infection and the incidence of bronchogenic carcinoma. Methods: The data were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database and used to compare patients infected with H. pylori against patients not infected, identified by ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Standard statistical methods were used.
Results: Between January 2010 and December 2019, the query returned 163,224 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of bronchogenic carcinoma was 1.853% (3,024 patients) in the H. pylori group, compared to 4.785% (7,810 patients) in the control group. The difference was statistically significant (p < 2.22×10⁻¹⁶), with an odds ratio of 0.367 (95% CI 0.353-0.383). The two groups were then matched by treatment and incidence of cancer, which resulted in a total of 101,739 patients analyzed after this match. The incidence of bronchogenic carcinoma was 1.929% (1,962 patients) in the H. pylori-with-treatment group, compared to 4.618% (4,698 patients) in the control-with-treatment group. The difference was statistically significant (p < 2.22×10⁻¹⁶), with an odds ratio of 0.403 (95% CI 0.383-0.425).
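As an illustration of the standard statistical methods mentioned, the odds ratio and its 95% confidence interval can be recomputed from the reported marginal counts. This is a sketch using the Wald (normal) approximation on the log odds ratio; since the database's exact matching procedure is not reproduced here, the result only approximates the reported 0.367:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Counts from the abstract: 3,024 / 163,224 cancers in the H. pylori group,
# 7,810 / 163,224 in the matched control group.
or_, lo, hi = odds_ratio_ci(3024, 163224 - 3024, 7810, 163224 - 7810)
```

The small discrepancy from the reported 0.367 is expected, since the published figure comes from the matched cohorts rather than these raw marginal counts.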

Keywords: bronchogenic carcinoma, helicobacter pylori, lung cancer, pathogen-associated molecular patterns

Procedia PDF Downloads 173
962 Unleashing the Potential of Green Finance in Architecture: A Promising Path for Balkan Countries

Authors: Luan Vardari, Dena Arapi Vardari

Abstract:

The Balkan countries, known for their diverse landscapes and cultural heritage, face the dual challenge of promoting economic growth while addressing pressing environmental concerns. In recent years, the concept of green finance has emerged as a powerful tool to achieve sustainable development and mitigate the environmental impact of various sectors, including architecture. This extended abstract explores the untapped potential of green finance in architecture within the Balkan region and highlights its role in driving sustainable construction practices and fostering a greener future. The abstract begins by defining green finance and emphasizing its relevance to the architectural sector in the Balkan countries. It underlines the benefits of green finance, such as economic growth, environmental conservation, and social well-being, and presents the integration of green finance into architectural projects as a means to achieve sustainable development goals while maintaining financial viability. The abstract then delves into the current state of green building practices in the Balkan countries and identifies the need for financial support to drive further adoption. It explores the existing regulatory frameworks and policies that promote sustainable architecture and discusses how green finance can complement these initiatives. The unique challenges faced by Balkan countries are highlighted, along with the opportunities that green finance presents in overcoming them. We highlight successful sustainable architectural projects in the region to showcase the practical application of green finance in the Balkans. These projects exemplify the effective utilization of green finance mechanisms, resulting in tangible economic and environmental impacts, including job creation, energy efficiency, and reduced carbon emissions.
The abstract concludes by identifying replicable models and lessons learned from these projects that can serve as a blueprint for future sustainable architecture initiatives in the Balkans. The importance of collaboration and knowledge sharing among stakeholders is emphasized. Engaging architects, financial institutions, governments, and local communities is crucial to promoting green finance in architecture. The abstract suggests the establishment of knowledge exchange platforms and regional/international networks to foster collaboration and facilitate the sharing of expertise among Balkan countries.

Keywords: sustainable finance, renewable energy, Balkan region, investment opportunities, green infrastructure, ESG criteria, architecture

Procedia PDF Downloads 49
961 Students' Experience Enhancement through Simulation: A Process Flow in the Logistics and Transportation Field

Authors: Nizamuddin Zainuddin, Adam Mohd Saifudin, Ahmad Yusni Bahaudin, Mohd Hanizan Zalazilah, Roslan Jamaluddin

Abstract:

Students’ enhanced experience through simulation is a crucial factor that brings reality into the classroom. The enhanced experience is about developing, enriching, and applying a generic process flow in the field of logistics and transportation. As educational technology has improved, the effective use of simulations has greatly increased, to the point where simulations should be considered a valuable, mainstream pedagogical tool. Additionally, in this era of ongoing (some say never-ending) assessment, simulations offer a rich resource for objective measurement and comparison. Simulation is not just another in the long line of passing fads (or short-term opportunities) in educational technology; it is rather a real key to helping our students understand the world. It is a way for students to acquire experience of how things and systems in the world behave and react, without actually touching them. In short, it is about interactive pretending. Simulation is about representing the real world, which includes grasping complex issues and solving intricate problems. Therefore, before the real process of inbound and outbound logistics and transportation can be simulated, a generic process flow must be developed. The paper focuses on the validation of this process flow using inputs gained from the sample. The sampling of the study covers multinational and local manufacturing companies, third-party logistics companies (3PL), and a government agency, selected in Peninsular Malaysia. A simulation flow chart is proposed in the study as the generic flow in logistics and transportation. A mainly qualitative approach was used to gather data. The study found that the systems used in the outbound and inbound processes are Systems, Applications and Products (SAP) and Material Requirements Planning (MRP).
Furthermore, some companies were using Enterprise Resource Planning (ERP) and Electronic Data Interchange (EDI) as part of Supplier Owned Inventory (SOI) networking, a result of globalized business between countries. Computerized documentation and transactions are mandatory requirements of the Royal Customs and Excise Department. The generic process flow will be the basis for developing a simulation program to be used in the classroom with the objective of further enhancing the students' learning experience. It will thus contribute to the body of knowledge on enriching students' employability and also serve as a way to train new workers in the logistics and transportation field.

Keywords: enhancement, simulation, process flow, logistics, transportation

Procedia PDF Downloads 318
960 Histological Grade Concordance between Core Needle Biopsy and Corresponding Surgical Specimen in Breast Carcinoma

Authors: J. Szpor, K. Witczak, M. Storman, A. Orchel, D. Hodorowicz-Zaniewska, K. Okoń, A. Klimkowska

Abstract:

Core needle biopsy (CNB) is well established as an important diagnostic tool in diagnosing breast cancer and is now considered the initial method of choice for diagnosing breast disease. Compared with fine needle aspiration (FNA), CNB provides more architectural information, allowing the evaluation of prognostic and predictive factors for breast cancer, including histological grade, one of the three prognostic factors used to calculate the Nottingham Prognostic Index. Several studies have previously described the concordance rate between CNB and surgical excision specimens in the determination of histological grade (HG); the rate ascribed to overall grade varies widely across the literature, ranging from 59% to 91%. The aim of this study is to examine how the data look in material from the authors' institution and how the results compare with those described in the previous literature. The study population included 157 women with a breast tumor who underwent a core needle biopsy for breast carcinoma and a subsequent surgical excision of the tumor. Both materials were evaluated for histological grade (scale from 1 to 3); HG was assessed only in core needle biopsies containing at least 10 well-preserved HPF with invasive tumor. The degree of concordance between CNB and surgical excision specimen in the determination of tumor grade was assessed by Cohen's kappa coefficient. The level of agreement between core needle biopsy and surgical resection specimen for overall histologic grading was 73% (113 of 155 cases). CNB correctly predicted the grade of the surgical excision specimen in 21 cases for grade 1 tumors (κ = 0.525, 95% CI 0.3634-0.6818), 52 cases for grade 2 tumors (κ = 0.5652, 95% CI 0.458-0.667), and 40 cases for grade 3 tumors (κ = 0.6154, 95% CI 0.4862-0.7309). The highest level of agreement was observed in grade 3 malignancies.
In 9 of 42 (21%) discordant cases, the grade was higher in the CNB than in the surgical excision specimen, corresponding to 6% of all cases. These results correspond to those noted in the literature, showing that underestimation occurs more frequently than overestimation. This study shows that the authors' institution's histologic grading of CNBs and surgical excisions has a fairly good correlation, consistent with findings in previous reports. Despite the inevitable limitations of CNB, it is an effective method for diagnosing breast cancer and guiding treatment options. Assessment of tumour grade by CNB is useful for treatment planning, so in the authors' opinion it is worth implementing in daily practice.
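Cohen's kappa, used above to quantify grade agreement, can be computed from a confusion matrix of CNB grades versus excision grades. A minimal sketch follows; the 2×2 counts below are illustrative, not the study's data:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: rater 1 categories, columns: rater 2 categories)."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                            # observed agreement
    pe = (m.sum(axis=1) @ m.sum(axis=0)) / n ** 2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Illustrative 2-grade table: 45 + 35 concordant cases out of 100.
kappa = cohens_kappa([[45, 5], [15, 35]])
```

On the commonly used Landis and Koch scale, values of 0.41-0.60 are read as moderate agreement and 0.61-0.80 as substantial agreement, which is why the grade 3 figure above stands out.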

Keywords: breast cancer, concordance, core needle biopsy, histological grade

Procedia PDF Downloads 213
959 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information about small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. Once the algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process.
This research also challenges the idea that algorithmic design is tied to efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double nature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the ability to creatively understand and manipulate historic (and synthetic) information will be a key feature of future innovative design processes. Finally, the main question we propose is whether an AI could be used not just to create an original and innovative group of simple buildings, but also to foster a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.
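The adversarial training loop underlying the GAN-based synthesis described above can be illustrated on toy data. The sketch below trains a minimal non-saturating GAN in NumPy, with a linear generator and a logistic-regression discriminator; the dimensions, learning rate, and toy "drawing" vectors are illustrative assumptions, far simpler than the image-to-image models the project would require:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Toy "drawings": 8-dim vectors around a fixed pattern, plus noise.
    base = np.linspace(0.0, 1.0, 8)
    return base + 0.05 * rng.standard_normal((n, 8))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

G = 0.1 * rng.standard_normal((4, 8))   # linear generator: 4-dim noise -> sample
w, b = np.zeros(8), 0.0                 # logistic discriminator on samples
lr, batch = 0.05, 16

for step in range(300):
    z = rng.standard_normal((batch, 4))
    fake, real = z @ G, real_batch(batch)
    # Discriminator step: ascend log-likelihood (real -> 1, fake -> 0).
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ w + b)
        w += lr * ((y - p)[:, None] * x).mean(axis=0)
        b += lr * (y - p).mean()
    # Generator step: non-saturating update, ascend log D(G(z)).
    p = sigmoid(fake @ w + b)
    G += lr * z.T @ ((1.0 - p)[:, None] * w[None, :]) / batch

sample = rng.standard_normal((1, 4)) @ G   # one synthetic "drawing"
```

In the project's setting, the generator and discriminator would be deep convolutional networks conditioned on an input drawing, as in image-to-image translation, but the alternating update structure is the same.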

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 128
958 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance, and is calibrated only during plant turnarounds, it has become prevalent in the gas industry. Inaccurate measurement at fiscal metering stations may be the most significant cause of mischarges in the natural gas industry in Libya. Even a trivial measurement error adds a rapidly escalating financial burden to custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya can be estimated in the range of multiple millions of dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research in this regard; hence, deeper knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications. Probing the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are among CFD's greatest strengths. In this paper, the flow of air through an orifice meter is numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The computed discharge coefficients are compared with discharge coefficients estimated by ISO 5167.
The influences of orifice bore thickness, orifice plate thickness, bevel angle, and perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91,100 was taken as the model. The results highlighted that the discharge coefficients are highly responsive to variations in plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those for vena contracta tappings, which are considered the ideal arrangement. In a general sense, it was also found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
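For reference, the ISO 5167 baseline against which the CFD results are compared is the Reader-Harris/Gallagher correlation. A sketch of it for flange tappings, evaluated at the paper's model conditions, is below; the coefficients are transcribed from the published form of ISO 5167-2 and should be checked against the standard before any fiscal use:

```python
import math

def discharge_coefficient_flange(beta, re_d, d_mm):
    """Reader-Harris/Gallagher C for an orifice plate with flange tappings
    (ISO 5167-2 form; pipe diameter D in mm). Sketch only."""
    l1 = l2 = 25.4 / d_mm                 # tapping-position terms for flange tappings
    a = (19000.0 * beta / re_d) ** 0.8
    m2 = 2.0 * l2 / (1.0 - beta)
    c = (0.5961 + 0.0261 * beta**2 - 0.216 * beta**8
         + 0.000521 * (1e6 * beta / re_d) ** 0.7
         + (0.0188 + 0.0063 * a) * beta**3.5 * (1e6 / re_d) ** 0.3
         + (0.043 + 0.080 * math.exp(-10.0 * l1)
            - 0.123 * math.exp(-7.0 * l1))
           * (1.0 - 0.11 * a) * beta**4 / (1.0 - beta**4)
         - 0.031 * (m2 - 0.8 * m2**1.1) * beta**1.3)
    if d_mm < 71.12:                      # small-pipe correction
        c += 0.011 * (0.75 - beta) * (2.8 - d_mm / 25.4)
    return c

# The paper's model: 2 in pipe (50.8 mm), beta = 0.5, Re_D = 91,100.
c = discharge_coefficient_flange(0.5, 91100.0, 50.8)
```

Sweeping this function over beta and Re_D gives the standard's baseline curve, against which CFD-predicted coefficients for non-standard plate geometries can be compared.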

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 107