Search results for: monotonically decreasing parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3168

438 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels

Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina

Abstract:

The effect of various levels of contamination on the interfacial air–water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, which diffuses upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection, combined with a fourth-order central discretisation for scalar diffusion. The damping effect of the surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is Re Ma/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (small Re Ma/We) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (the ratio of the kinematic viscosity to the gas diffusivity in water) and the surface divergence β, i.e. K_L ∝ √(β/Sc). Previously it has been shown that this relation works well for surfaces with low to moderate contamination; however, it breaks down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for Re Ma/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger Re Ma/We. This is an important result, indicating that - for various levels of contamination - the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that - with increasing Re Ma/We - the dependency of KL on β begins to break down, as the increased damping of near-surface fluctuations results in an increased damping of β. Especially for large levels of contamination, this damping is so severe that the KL estimated from β significantly underestimates the actual transfer velocity.
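
As a rough illustration of the scaling relation above, the sketch below (Python, with a hypothetical proportionality constant c that is not taken from the study) shows how a surface-divergence-based estimate of KL is computed and why a damped β pulls the estimate down.

```python
import numpy as np

# Surface-divergence model: K_L is proportional to sqrt(beta / Sc).
# The proportionality constant c is a hypothetical placeholder, not a study value.
def transfer_velocity(beta, Sc, c=1.0):
    """Estimate the interfacial gas transfer velocity from the surface divergence."""
    return c * np.sqrt(beta / Sc)

Sc = np.array([2, 4, 8, 16, 32], dtype=float)   # Schmidt numbers used in the study
beta_clean = 0.10                               # illustrative surface divergence, clean surface
beta_contaminated = 0.01                        # illustrative value: contamination damps beta

for beta in (beta_clean, beta_contaminated):
    kl = transfer_velocity(beta, Sc)
    print(f"beta = {beta:5.2f} ->", np.round(kl, 4))
# A strongly damped beta lowers the predicted K_L, which is how the model comes to
# underestimate the actual transfer velocity at high contamination levels.
```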

Keywords: contamination, gas transfer, surfactants, turbulence

Procedia PDF Downloads 286
437 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire

Authors: Vinay A. Sharma, Shiva Prasad H. C.

Abstract:

The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is relatively uncomplicated, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization in the solutions can come later, and hence, the resources deployed towards attaining the solution are higher than what they would have been in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds along, the individuals working on the project face fresh challenges as a team and are better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, better understand the causes and consequences of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, acquires substantial knowledge, and begins to transfer it efficiently, the individuals in charge of the project along with the managers focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower level of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis, and the dedicated resources for the solutions to various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic will be attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher, but the resources deployed are comparatively lower. Hence, the difference between consecutive plotted ‘resources’ decreases, and as a result, the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment, ideally, the difference keeps on decreasing, and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is being made for proportionally higher logic on a larger problem; that is, ideally, the slope of the graph increases with the plotting of each point.

Keywords: decision-making, leadership, logic, strategic management

Procedia PDF Downloads 93
436 Effects of the Age, Education, and Mental Illness Experience on Depressive Disorder Stigmatization

Authors: Soowon Park, Min-Ji Kim, Jun-Young Lee

Abstract:

Motivation: The stigma of mental illness has been studied in many disciplines, including social psychology, counseling psychology, sociology, psychiatry, public health care, and related areas, because individuals labeled as ‘mentally ill’ are often deprived of their rights and their life opportunities. To understand the factors that deepen the stigma of mental illness, it is important to understand its influencing factors. Problem statement: Depression is a common disorder in adults, but the incidence of help-seeking is low. Researchers have believed that this poor help-seeking behavior is related to the stigma of mental illness, which results from low mental health literacy. However, it is uncertain whether increasing mental health literacy decreases mental health stigmatization. Furthermore, even though decreasing stigmatization is important, the stigma of mental illness is still a stable and long-lasting phenomenon. Thus, factors other than knowledge about mental disorders have the power to maintain the stigma. Investigating the influencing factors that facilitate the stigma of psychiatric disease could help lower social stigmatization. Approach: Face-to-face interviews were conducted with a multi-clustering sample. A total of 700 Korean participants (38% male), ranging in age from 18 to 78 (M(SD) age = 48.5 (15.7)), answered demographic questions, the Korean version of Link’s Perceived Devaluation and Discrimination (PDD) scale for the assessment of social stigmatization against depression, and the Korean version of the WHO Composite International Diagnostic Interview for the assessment of mental disorders. Multiple regression was conducted to identify predictors of social stigmatization against depression. Age, sex, years of education, income, living location, and experience of mental illness were used as predictors. Results: The predictors accounted for 14% of the variance in the stigma of depressive disorders (F(6, 693) = 20.27, p < .001). Among those, only age, years of education, and experience of mental illness significantly predicted social stigmatization against depression. The standardized regression coefficient of age had a negative association with stigmatization (β = -.20, p < .001), but years of education (β = .20, p < .001) and experience of mental illness (β = .08, p < .05) positively predicted depression stigmatization. Conclusions: The present study clearly demonstrates the association between personal factors and depressive disorder stigmatization. Younger age, more education, and self-stigma appeared to increase stigmatization. Young, highly educated, and mentally ill people tend to reject patients with depressive disorder as friends, teachers, or babysitters; they also tend to think that those patients have lower intelligence and abilities. These results suggest the possibility that people from a high social class, or highly educated people, who have the power to make decisions, help maintain the social stigma against mental illness patients. Increasing awareness that people from higher social classes hold more stigmatizing attitudes toward depressive disorders will help decrease biased attitudes against mentally ill patients.

Keywords: depressive disorder stigmatization, age, education, self-stigma

Procedia PDF Downloads 386
435 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After acquisition, the data were pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation tasks such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and its default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset in classifying new instances into the normal, DOS, U2R, R2L and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested towards an applicable system in the area of the study.
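
A minimal sketch of the modelling step described above, using scikit-learn as a stand-in for the WEKA tools actually used in the study: DecisionTreeClassifier (CART) approximates J48 (C4.5) and GaussianNB approximates Naïve Bayes, and the data are random placeholders rather than the Lincoln Laboratory records.

```python
# Sketch only: placeholder data, scikit-learn models standing in for WEKA's J48 / Naive Bayes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # placeholder feature matrix
y = rng.integers(0, 5, size=1000)          # 5 classes: normal, DOS, U2R, R2L, probe

for name, model in [("decision tree", DecisionTreeClassifier()),
                    ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```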

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 283
434 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings

Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller

Abstract:

Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet-based analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work it will be shown that, using a machine learning approach, the derived measures are suitable to distinguish automatically between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data that were used to compute the principal components. Therefore, in the second part of the work we propose a strategy to achieve a normalization of the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results that are based on PCA spaces obtained from different clinical subjects.

Keywords: wavelet-based analysis, multiscale product, normalization, computer assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram

Procedia PDF Downloads 252
433 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation

Authors: Kausar Harun, Ahmad Azmin Mohamad

Abstract:

Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved to be more accurate and time-effective. Thus, integration between these two methods is essential to closely resemble the properties of synthesized ZnO. In this study, experimentally grown ZnO nanoparticles were prepared by a sol-gel storage method with zinc acetate dihydrate and methanol as precursor and solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was found to be 12 hours. Phase and structural analysis showed that single-phase ZnO with a hexagonal wurtzite structure was produced. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a = b = 3.2498 Å and c = 5.2068 Å, which were later used as the main input in the first-principles calculations. By applying density functional theory (DFT) embedded in the CASTEP computer code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) gave the structure with the lowest energy and the smallest lattice deviations. In this study, emphasis was also given to the modification of the valence electron energy levels to overcome the underestimation in the DFT calculation. The Zn and O valence energies were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated based on the GGA-PBE+U functional within the ultrasoft pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculation was valid, as the resulting properties were comparable with those reported in the literature. The time taken to evaluate certain properties via physical testing can then be eliminated, as the simulation can be done through computational methods.
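
As a small worked example, the refined lattice constants reported above can be turned into the quantities typically passed to the DFT input; the sketch below computes the c/a ratio (the ideal wurtzite value is √(8/3) ≈ 1.633) and the hexagonal cell volume V = (√3/2)a²c.

```python
import math

# Refined wurtzite lattice constants reported above (angstroms).
a = 3.2498
c = 5.2068

# Ideal wurtzite c/a ratio is sqrt(8/3); deviations hint at strain or defects.
c_over_a = c / a
ideal = math.sqrt(8.0 / 3.0)

# Volume of the hexagonal cell: V = (sqrt(3)/2) * a^2 * c
volume = (math.sqrt(3.0) / 2.0) * a**2 * c

print(f"c/a = {c_over_a:.4f} (ideal {ideal:.4f})")
print(f"cell volume = {volume:.3f} A^3")
```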

Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles

Procedia PDF Downloads 296
432 Sexual Health And Male Fertility: Improving Sperm Health With Focus On Technology

Authors: Diana Peninger

Abstract:

Over 10% of couples in the U.S. have infertility problems, with roughly 40% traceable to the male partner. Yet, little attention has been given to improving men’s contribution to the conception process. One solution that is showing promise in increasing conception rates for IVF and other assisted reproductive technology treatments is a first-of-its-kind semen collection device that has been engineered to mitigate sperm damage caused by traditional collection methods. Patients are able to collect semen at home and deliver it to clinics within 48 hours for use in fertility analysis and treatment, with less stress and improved specimen viability. This abstract will share these findings along with expert insight and tips to help attendees understand the key role sperm collection plays in addressing and treating reproductive issues, while helping to improve patient outcomes and success. Our research aimed to determine whether male reproductive outcomes can be improved by enhancing sperm specimen health with a focus on technology. We utilized a redesigned semen collection cup (patented as the Device for Improved Semen Collection, DISC, U.S. Patent 6864046, known commercially as ProteX) that met a series of physiological parameters. Previous research demonstrated significant improvement in semen parameters (forward motility, progression, viability, and longevity) and overall sperm biochemistry when the DISC is used for collection. Animal studies have also shown dramatic increases in pregnancy rates. Our current study compares samples collected in the DISC, the next-generation DISC (DISCng), and a standard specimen cup (SSC): dry, with a measured 1 mL amount of media, and with media in excess (5 mL). Both human and animal testing will be included. With sperm counts declining at alarming rates due to environmental, lifestyle, and other health factors, accurate evaluations of sperm health are critical to understanding reproductive health and the origins and treatments of infertility. An increase in sperm health, as measured by extensive semen parameter analysis, was demonstrated, along with semen parameters that remained stable for 48 hours, expanding the processing time from 1 hour to 48 hours.

Keywords: reproductive, sperm, male, infertility

Procedia PDF Downloads 116
431 Solution-Focused Wellness: An Evidence-Based Approach to Wellness Promotion

Authors: James Beauchemin

Abstract:

Research indicates that college students are experiencing mental health challenges of greater severity, and an increased number of students are seeking help. Contributing to the compromised wellness of the college student population is the prevalence of unhealthy lifestyle habits and behaviors such as alcohol consumption, tobacco use, dietary concerns, risky sexual behaviors, and lack of physical activity. Alternative approaches are needed for this population that emphasize prevention and holistic lifestyle change, mitigate mental health and wellness challenges, and alleviate strain on campus resources. This presentation will introduce a Solution-Focused Wellness (SFW) intervention model, examine wellness domains and solution-focused strategies to promote personal well-being, and provide supporting research from multiple studies that illustrate intervention effectiveness with a collegiate population. Given the subjective and personal nature of wellness, a therapeutic approach that provides the opportunity for individuals to conceptualize and operationalize wellness themselves is critical to facilitating lasting wellness-based change. Solution-Focused Brief Therapy (SFBT) is a strength-based modality defined by its emphasis on constructing solutions rather than focusing on problems, and by the assumption that clients have the resources and capacity to change. SFBT has demonstrated effectiveness as a brief therapeutic intervention with the college population, in groups, and in relation to health and wellness. By integrating SFBT strategies with personal wellness, a brief intervention was developed to support college students in establishing lifestyle trends consistent with their conceptualizations of wellness. Research supports the effectiveness of the SFW model in improving college student wellness in both face-to-face and web-based formats. Outcomes of controlled and longitudinal studies will be presented, demonstrating significant improvements in perceptions of stress, life satisfaction, happiness, mental health, well-being, and resilience. Overall, there is compelling evidence that utilization of a Solution-Focused Brief Therapy approach with college students can help to improve personal wellness and establish healthy lifestyle trends, providing an effective prevention-focused strategy for college counseling centers and wellness centers to employ. Primary research objectives include: 1) establish an evidence-based approach to facilitating wellness promotion among the college student population, 2) examine the effectiveness of a Solution-Focused Wellness (SFW) intervention model in decreasing stress and improving personal wellness, mental health, life satisfaction, and resiliency, 3) investigate intervention impacts over time (e.g., 6 weeks post-intervention), and 4) demonstrate the SFW intervention’s utility in wellness promotion and associated outcomes when compared with a no-treatment control and alternative intervention approaches.

Keywords: wellness, college students, solution-focused, prevention

Procedia PDF Downloads 53
430 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and the slip rate function, which are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
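
For readers converting between the seismic moment Mo used in the scaling relations and the moment magnitude Mw quoted above, a sketch using the standard Hanks-Kanamori relation (not a formula given in the abstract) is shown below.

```python
import math

def moment_magnitude(Mo_newton_metre: float) -> float:
    """Hanks-Kanamori relation: Mw = (2/3) * (log10(Mo) - 9.1), with Mo in N*m."""
    return (2.0 / 3.0) * (math.log10(Mo_newton_metre) - 9.1)

# Illustrative seismic moments (N*m); Mw > 7.0 corresponds to Mo above roughly 3.5e19 N*m.
for Mo in (3.5e19, 1.0e20, 5.0e20):
    print(f"Mo = {Mo:.2e} N*m -> Mw = {moment_magnitude(Mo):.2f}")
```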

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 133
429 Optimizing Fermented Paper Production Using Spirogyra sp. Interpolating with Banana Pulp

Authors: Hadiatullah, T. S. D. Desak Ketut, A. A. Ayu, A. N. Isna, D. P. Ririn

Abstract:

Spirogyra sp. is a genus of microalgae with a high carbohydrate content that serves as a good medium for bacterial fermentation to produce cellulose. This study aimed to determine the effect of banana pulp on the fermented paper production process using Spirogyra sp. and to characterize the paper product. The method included the production of bacterial cellulose, an assay of the effect of fermented paper interpolated with banana pulp using Spirogyra sp., and an assay of paper characteristics comprising grammage, water absorption, thickness, tensile strength, tear resistance, density, and organoleptic properties. Experiments were carried out in a completely randomized design with variations in the concentration of the treatments used in fermented paper production with banana pulp interpolation using Spirogyra sp. Data for each parameter were analyzed by ANOVA, followed by a significant difference test at a 5% error rate using SPSS. Nata production results indicated that the different carbon sources (glucose and sugar) did not show significant differences in the cellulose parameters assayed; significantly different results were observed only for the control treatment. Although not significantly different between the added carbon sources, sugar showed a higher potential to produce cellulose. The characterization of the fermented paper showed that the control treatment, without interpolation of a carbon source or banana pulp, gave a higher grammage than banana pulp interpolation. The grammage of the control was 260 gsm, comparable to cardboard, while the grammage of paper produced with banana pulp interpolation was about 120-200 gsm, comparable to magazine paper and art paper. The density, weight, and water absorption assays, together with the organoleptic assay, showed the highest results for the banana pulp interpolation with sugar as the carbon source: 14.28 g/m², 0.02 g, and 0.041 g/cm²·min, respectively. The conclusion is that paper made from nata interpolated with sugar and banana pulp is a potential formulation for producing high-quality paper.

Keywords: cellulose, fermentation, grammage, paper, Spirogyra sp.

Procedia PDF Downloads 320
428 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at an interval of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62%, compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translates into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, due to the greater noise reduction than SUV reduction at the highest β-value. In comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM in all qualitative features. Conclusions: The BSREM algorithm using more iteration numbers leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
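
A minimal sketch of how SNR and SBR, as used above, are commonly computed from region-of-interest statistics; the ROI definitions and voxel values are generic assumptions, not the study's exact measurement protocol.

```python
import numpy as np

# Generic ROI-based definitions (an assumption, not the study's exact protocol):
#   SNR = mean(lesion ROI) / std(background ROI)
#   SBR = mean(lesion ROI) / mean(background ROI)
def snr(lesion_roi, background_roi):
    return np.mean(lesion_roi) / np.std(background_roi)

def sbr(lesion_roi, background_roi):
    return np.mean(lesion_roi) / np.mean(background_roi)

rng = np.random.default_rng(1)
background = rng.normal(loc=1.0, scale=0.15, size=500)   # placeholder background voxels
lesion = rng.normal(loc=4.0, scale=0.30, size=50)        # placeholder lesion voxels

print(f"SNR = {snr(lesion, background):.1f}, SBR = {sbr(lesion, background):.1f}")
# Raising the BSREM beta-value lowers background noise (the std), which raises SNR
# even when the lesion SUV itself decreases slightly.
```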

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 80
427 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth over the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on the local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá, using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network within the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards the K-means clustering technique was implemented to corroborate the relations found previously and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to assess possible variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influential factors on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend of pollutant concentrations over the last five years. Also, in rainy periods (March-June and September-December) some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and also between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and understand its distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.
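
A compact sketch of the PCA-then-K-means pipeline described above, run on a placeholder table of standardized readings; the column names are assumptions and do not reflect the Bogotá monitoring-network schema.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
cols = ["PM10", "PM2.5", "NO2", "O3", "wind_speed", "temperature", "humidity"]
data = pd.DataFrame(rng.normal(size=(2000, len(cols))), columns=cols)  # placeholder readings

X = StandardScaler().fit_transform(data)        # variables live on very different scales

pca = PCA(n_components=3).fit(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print(pd.DataFrame(pca.components_, columns=cols, index=["PC1", "PC2", "PC3"]).round(2))

# Cluster the observations in the reduced space to look for recurring patterns,
# e.g. station groups or shift-dependent regimes.
scores = pca.transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```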

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 244
426 Comparison of Yb and Tm-Fiber Laser Cutting Processes of Fiber Reinforced Plastics

Authors: Oktay Celenk, Ugur Karanfil, Iskender Demir, Samir Lamrini, Jorg Neumann, Arif Demir

Abstract:

Due to their favourable material characteristics, fiber reinforced plastics are amongst the main topics of all current lightweight construction megatrends. Especially in transportation, in trends ranging from aeronautics over the automotive industry to naval transportation (yachts, cruise liners), the expected economic and environmental impact is huge. In naval transportation, components like yacht bodies, antenna masts, and decorative structures like deck lamps, lighthouses and pool areas represent cheap and robust solutions. Commercially available laser tools like carbon dioxide gas lasers (CO₂), frequency-tripled solid state UV lasers, and Neodymium-YAG (Nd:YAG) lasers can be used. These tools have emission wavelengths of 10 µm, 0.355 µm, and 1.064 µm, respectively. The scientific goal is first of all the generation of a parameter matrix for laser processing of each material used, for a Tm-fiber laser system (wavelength 2 µm). These parameters are the heat affected zone, process gas pressure, workpiece feed velocity, intensity, irradiation time, etc. The results are compared with results obtained with well-known material processing lasers, such as Yb-fiber lasers (wavelength 1 µm). Compared to the CO₂ laser, the Tm laser offers essential advantages for future laser processes like cutting, welding, ablating for repair, and drilling in composite part manufacturing (components of cruise liners, marine pipelines). Some of these are the possibility of beam delivery in a standard fused silica fiber, which enables hand-guided processing, eye safety resulting from the wavelength, and excellent beam quality and brilliance due to the fiber nature. One more feature that is economically important for boat, automotive and military manufacturing projects is that the wavelength of 2 µm is highly absorbed by the plastic matrix and thus enables its selective removal for repair procedures.

Keywords: Thulium (Tm) fiber laser, laser processing of fiber-reinforced plastics (FRP), composite, heat affected zone

Procedia PDF Downloads 182
425 Analysis of Friction Stir Welding Process for Joining Aluminum Alloy

Authors: A. M. Khourshid, I. Sabry

Abstract:

Friction Stir Welding (FSW), a solid state joining technique, is widely used for joining Al alloys for aerospace, marine, automotive and many other applications of commercial importance. FSW was carried out using a vertical milling machine on Al 5083 alloy pipe. These pipe sections are relatively small in diameter, 5 mm, and relatively thin walled, 2 mm. In this study, 5083 aluminum alloy pipes were welded as similar-alloy joints using the FSW process in order to investigate mechanical and microstructural properties, at a rotation speed of 1400 rpm and weld speeds of 10, 40 and 70 mm/min. In order to investigate the effect of welding speed on mechanical properties, metallographic and mechanical tests were carried out on the welded areas, including Vickers hardness profiles and tensile tests of the joints. As a metallurgical feasibility study of friction stir welding for joining Al 6061 aluminum alloy, welding was performed on pipes with different thicknesses of 2, 3 and 4 mm, five rotational speeds (485, 710, 910, 1120 and 1400 rpm), and traverse speeds of 4, 8 and 10 mm/min. This work focuses on two methods, artificial neural networks (using the Pythia software) and response surface methodology (RSM), to predict the tensile strength, percentage elongation and hardness of friction stir welded 6061 aluminum alloy. An artificial neural network (ANN) model was developed for the analysis of the friction stir welding parameters of the 6061 pipe. The tensile strength, percentage elongation and hardness of the weld joints were predicted as a function of tool rotation speed, material thickness and travel speed. A comparison was made between measured and predicted data. A response surface methodology (RSM) model was also developed, and the values obtained for the responses (tensile strength, percentage elongation and hardness) were compared with measured values. The effect of the FSW process parameters on the mechanical properties of the 6061 aluminum alloy has been analyzed in detail.
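
A minimal sketch of the ANN regression step described above, with scikit-learn's MLPRegressor standing in for the Pythia software and a random placeholder table in place of the measured FSW data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# Inputs: tool rotation speed (rpm), wall thickness (mm), travel speed (mm/min)
X = np.column_stack([
    rng.choice([485, 710, 910, 1120, 1400], size=60),
    rng.choice([2, 3, 4], size=60),
    rng.choice([4, 8, 10], size=60),
]).astype(float)
y = rng.normal(loc=200.0, scale=15.0, size=60)   # placeholder tensile strength (MPa)

# Small feed-forward network; scaling the inputs first keeps training well-behaved.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)

pred = model.predict([[1400.0, 2.0, 10.0]])[0]
print(f"predicted UTS: {pred:.1f} MPa (placeholder data)")
```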

Keywords: friction stir welding (FSW), Al alloys, mechanical properties, microstructure

Procedia PDF Downloads 446
424 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Recently developed scales have addressed teachers’ resilience specifically. Although they have benefited the field, they do not include some of the critical protective factors of teachers’ resilience identified in the literature. To address this limitation, we aimed at designing a more comprehensive scale for measuring teachers’ resilience which encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers’ Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers’ Resilience Scale and the Teachers’ Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers’ Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, an item response theory analysis using a two-parameter logistic model (2PL) was applied to check the items within each factor. It revealed that 9 items did not fit the corresponding factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers’ resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). Results show that it presents satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form. In conclusion, our results confirmed that the TPFRS is a new multi-dimensional instrument valid for assessing teachers’ protective factors of resilience, and it can be safely used in future research and interventions in the teaching profession.
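
For reference, the item screening step relies on the two-parameter logistic (2PL) item response function; the sketch below uses illustrative discrimination and difficulty values, not parameters estimated from the TPFRS data.

```python
import numpy as np

def p_endorse_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model:
    P(endorse | theta) = 1 / (1 + exp(-a * (theta - b)))
    a = discrimination, b = difficulty/location, theta = latent trait (here, resilience)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                 # latent trait grid
# Illustrative parameters: a well-discriminating item vs. a poorly fitting one.
print("good item:", np.round(p_endorse_2pl(theta, a=2.0, b=0.0), 2))
print("weak item:", np.round(p_endorse_2pl(theta, a=0.3, b=0.0), 2))
# Items whose curves are nearly flat (low a) add little information about the latent
# factor and are candidates for removal, as done in the item screening above.
```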

Keywords: resilience, protective factors, teachers, item response theory

Procedia PDF Downloads 72
423 Candida antarctica Lipase Assisted Enrichment of n-3 PUFA in Indian Sardine Oil

Authors: Prasanna Belur, P. R. Ashwini, Sampath Charanyaa, I. Regupathi

Abstract:

Indian oil sardine (Sardinella longiceps) is one of the richest and cheapest sources of n-3 polyunsaturated fatty acids (n-3 PUFA) such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). The health benefits conferred by n-3 PUFA upon consumption, in the prevention and treatment of coronary, neuromuscular and immunological disorders and allergic conditions, are well documented. Natural refined Indian sardine oil generally contains about 25% (w/w) n-3 PUFA along with various unsaturated and saturated fatty acids in the form of mono-, di-, and triglycerides. Having a high concentration of n-3 PUFA in the glyceride form is most desirable for human consumption to obtain maximum health benefits. Thus, enhancing the n-3 PUFA content while retaining it in the glyceride form with green technology is the need of the hour. In this study, refined Indian sardine oil was subjected to selective hydrolysis by Candida antarctica lipase to enhance the n-3 PUFA content. The degree of hydrolysis and the enhancement of n-3 PUFA content were estimated by determining the acid value, iodine value, and EPA and DHA content (by gas chromatographic methods after derivatization) before and after hydrolysis. Various reaction parameters such as pH, temperature, enzyme load, lipid-to-aqueous-phase volume ratio and incubation time were optimized by conducting trials with a one-parameter-at-a-time approach. Incubating the enzyme solution with refined sardine oil at a volume ratio of 1:1, at pH 7.0, for 60 minutes at 50 °C, with an enzyme load of 60 mg/ml, was found to be optimum. After enzymatic treatment, the oil was subjected to refining to remove free fatty acids and moisture using previously optimized refining technology. Enzymatic treatment at the optimal conditions resulted in a 12.11% increase in the degree of hydrolysis. The iodine number increased by 9.7%, and the n-3 PUFA content was enhanced by 112% (w/w). Selective enhancement of n-3 PUFA glycerides, eliminating saturated and unsaturated fatty acids from the oil using an enzyme, is an interesting proposition, as this technique is environmentally friendly, cost effective, and provides a natural source of n-3 PUFA rich oil.

Keywords: Candida antarctica, lipase, n-3 polyunsaturated fatty acids, sardine oil

Procedia PDF Downloads 210
422 The Review and Contribution of Taiwan Government Policies on Environmental Impact Assessment to Water Recycling

Authors: Feng-Ming Fan, Xiu-Hui Wen, Po-Feng Chen, Yi-Ching Tu

Abstract:

Because of inherent natural conditions and man-made damage, insufficient water resources are a very important issue that Taiwan needs to face immediately. The regulations and laws on water resource protection and recycling are gradually being completed, but a specific method for checking water recycling effectiveness is still lacking. This research focused on industrial parks that had already been certified through EIA, in order to establish a professional checking system and to contribute to the sustainable use of water resources. Under Taiwan’s Environmental Impact Assessment policies, established in 1994, some development projects were requested to set certain water recycling ratios for effective water resource usage. Water usage in these parks covers a wide range of activities because companies of all kinds enter and are stationed there. To effectively control the implementation of the water and wastewater recycling ratios committed to in the EIA for industrial parks, we invited experts and scholars in this field to discuss with the related agencies and formulate the policy and audit plan. In addition, meetings were held to set public-version water balance diagrams and recycling parameters. We selected nine industrial parks that were requested to set certain water recycling ratios at the EIA examination stage and then, according to the water usage quantity, audited 340 factories in these industrial parks through on-site and document examinations, with fruitful results: the average water usage per unit area per year of all the examined industrial parks is 31,000 tons/hectare/year, which is just half of the Taiwanese industrial average. It is obvious that industrial parks with EIA commitments can decrease water resource consumption effectively. Taiwan’s Environmental Impact Assessment policies took the follow-through tracking function into consideration from the beginning. The results of this research verify the importance of implementing the water recycling commitments in the EIA to save water resources. Inducing development units to follow their EIA commitments to achieve a balance between environmental protection and economic development is one of the important values of the EIA.

Keywords: Taiwan government policies of environmental impact assessment, water recycling ratio of EIA commitment, water resources sustainable usage, water recycling

Procedia PDF Downloads 206
421 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine

Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski

Abstract:

Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in the sales of new cars increases every year. However, their performance and, above all, their range cannot yet be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which can hardly be accepted given the time needed to refill a fuel tank. Therefore, ways to reduce the adverse features of cars equipped with electric motors only are being sought. One of the methods is a combination of an electric motor as the main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers like Mazda and Audi, together with the best companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. The electricity generator is powered by a Wankel engine, which had seemed to pass into history. This small, low-weight engine with a rotating piston and a very low vibration level turned out to be an excellent source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, like high fuel consumption, high emission of toxic substances, or the short lifetime typical of its traditional application. The operation of the engine at a constant rotational speed enables a significant increase in its lifetime, and its small external dimensions enable compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus. The research object was the Aixro XR50 rotary engine with the electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, capable of applying any number of load cycles. The parameters recorded included speed and torque as well as the position of the throttle in the inlet system. Increasing and decreasing the load did not significantly change the engine speed, which means that the control algorithm parameters are correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
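
The abstract does not disclose the structure of the speed-holding algorithm, so the sketch below is only a generic proportional-integral (PI) throttle loop under assumed engine constants, illustrating how a controller can hold a set speed through step changes in brake load.

```python
# Generic sketch of a speed-hold loop: a PI controller adjusts the throttle so that
# engine speed stays at the set point while the brake load torque changes. This is an
# illustrative assumption; it is not the algorithm implemented on the Aixro XR50.
import math

def simulate_speed_hold(setpoint_rpm=5000.0, steps=400, dt=0.01,
                        kp=0.0004, ki=0.002, inertia=0.05):
    rpm, integral = setpoint_rpm, 0.0
    history = []
    for k in range(steps):
        load_torque = 8.0 if k < steps // 2 else 14.0    # step change in brake load (N*m)
        error = setpoint_rpm - rpm
        integral += error * dt
        throttle = min(max(kp * error + ki * integral, 0.0), 1.0)
        engine_torque = 40.0 * throttle                  # crude static engine map (N*m)
        # Convert net torque to a speed change: d(rpm) = (T/J) * dt * 60 / (2*pi)
        rpm += (engine_torque - load_torque) / inertia * dt * 60.0 / (2.0 * math.pi)
        history.append(rpm)
    return history

trace = simulate_speed_hold()
print(f"speed after load steps: {trace[-1]:.0f} rpm")
```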

Keywords: electric vehicle, power generator, range extender, Wankel engine

Procedia PDF Downloads 140
420 Crop Breeding for Low Input Farming Systems and Appropriate Breeding Strategies

Authors: Baye Berihun Getahun, Mulugeta Atnaf Tiruneh, Richard G. F. Visser

Abstract:

Resource-poor farmers practice low-input farming systems, and yet most breeding programs give little attention to this huge farming system, which serves as a source of food and income for many people in developing countries. The high-input conventional breeding system appears to have failed to adequately meet the needs and requirements of the ‘difficult’ environments operating under this system. Moreover, the unavailability of resources for crop production is reaching its peak, the environment is maltreated by excessive use of agrochemicals, crop productivity has reached a plateau, particularly in developed nations, the world population is increasing, and food shortages persist for poor societies. In various parts of the world, genetic gain at the farmers’ level remains low, which could be associated with the low adoption of crop varieties developed under high-input systems. Farmers usually use their local varieties and apply minimum inputs as a risk-avoiding and cost-minimizing strategy. This evidence indicates that the conventional high-input plant breeding system has failed to feed the world population, and the world is moving further away from the United Nations’ goals of ending hunger, food insecurity, and malnutrition. In this review, we discuss the rationale for focused breeding programs for low-input farming systems and the technical aspects of crop breeding that accommodate future food needs, along with their significance for developing countries in the face of decreasing resources for crop production. To this end, the application of exotic introgression techniques like polyploidization, pan-genomics, comparative genomics, and de novo domestication as pre-breeding techniques is discussed in the review as a way to exploit the untapped genetic diversity of crop wild relatives (CWRs). Desired recombinants developed at the pre-breeding stage are exploited through appropriate breeding approaches such as evolutionary plant breeding (EPB), breeding for rhizosphere-related traits, and participatory plant breeding. Populations advanced through evolutionary breeding, such as composite cross populations (CCPs), and the rhizosphere-associated traits breeding approach, which provides opportunities for improving tolerance to abiotic and biotic soil stress, nutrient acquisition capacity, and crop-microbe interaction in improved varieties, are reviewed. Overall, we conclude that the low-input farming system is a huge farming system that requires distinctive breeding approaches, and that the exotic pre-breeding introgression techniques and the appropriate breeding approaches, which deploy the skills and knowledge of both breeders and farmers, are vital to develop heterogeneous landrace populations, which are effective for farmers practicing low-input farming across the world.

Keywords: low input farming, evolutionary plant breeding, composite cross population, participatory plant breeding

Procedia PDF Downloads 25
419 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field in science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one of the probabilistic models that describe a system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, which is based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
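
A minimal sketch of likelihood-free inference for a CTMC: a simple birth-death process is simulated with the Gillespie algorithm and a single rate is recovered by ABC rejection. This stands in for the general workflow only; the Repressilator model analysed in the paper involves more species and reactions, as well as the PMCMC alternative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth, death, x0=10, t_end=5.0):
    """Simulate a birth-death CTMC with the Gillespie algorithm; return the final count."""
    t, x = 0.0, x0
    while t < t_end:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)      # waiting time to the next event
        if rng.random() < rates[0] / total:
            x += 1                             # birth
        else:
            x -= 1                             # death
    return x

# "Observed" data generated with a known death rate, which ABC should recover.
true_death = 0.5
observed = np.array([gillespie_birth_death(5.0, true_death) for _ in range(20)])

# ABC rejection: draw the death rate from the prior, simulate, keep draws whose summary
# statistic (mean population) is close to the observed one.
accepted = []
for _ in range(500):
    candidate = rng.uniform(0.05, 2.0)         # uniform prior on the death rate
    simulated = np.array([gillespie_birth_death(5.0, candidate) for _ in range(20)])
    if abs(simulated.mean() - observed.mean()) < 1.5:
        accepted.append(candidate)

print(f"accepted {len(accepted)} draws; "
      f"posterior mean death rate ~ {np.mean(accepted):.2f} (true {true_death})")
```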

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 190
418 Diversified Farming and Agronomic Interventions Improve Soil Productivity, Soybean Yield and Biomass under Soil Acidity Stress

Authors: Imran, Murad Ali Rahat

Abstract:

One of the factors affecting crop production and nutrient availability is acidic stress. The most important nutrient deficiency under acidic stress conditions is that of phosphorus, which results in stunted growth and reduced yield because of inefficient nutrient cycling. At the Agriculture Research Institute Mingora Swat, Pakistan, tests were carried out for the first time over the course of two consecutive summer seasons, 2016 (year 1) and 2017 (year 2), with the goal of increasing crop productivity and nutrient availability under acidic stress. The experimental treatments incorporated three organic sources (peach nano-black carbon, compost, and dry peach waste), three phosphorus rates, and two beneficial microorganisms (Trichoderma and PSB). The findings showed that, under acid stress conditions, peach organic sources had a significant impact on yield and yield components. Among the organic sources, the application of nano-black carbon produced the greatest thousand-seed weight of 164.6 g, while among the beneficial microbes, seed inoculation with phosphorus solubilizing bacteria (PSB) increased the thousand-seed weight compared to Trichoderma soil application. The thousand-seed weight was also significantly affected by the phosphorus rates. The treatment of 100 kg P ha-1 produced the highest thousand-seed weight (167.3 g), followed by 75 kg P ha-1 (162.5 g). Compost amendments provided the highest seed yield (2,140 kg ha-1), comparable to the application of nano-black carbon (2,120 kg ha-1). The lowest seed yield (1,808 kg ha-1) was observed with peach residues. Compared to seed inoculation with PSB (1,913 kg ha-1), soil treatment with Trichoderma resulted in the highest seed yield (2,132 kg ha-1). Applying phosphorus to the soybean crop greatly increased its output. The highest seed yield (2,364 kg ha-1) was obtained with 100 kg P ha-1, comparable to 75 kg P ha-1 (2,335 kg ha-1), while the lowest seed yield (1,569 kg ha-1) was obtained with 50 kg P ha-1. The average values showed that, compared to control plots (3.3 g kg-1), peach organic sources produced the greatest SOC (10.0 g kg-1). Plots with treated soil had a maximum soil P of 19.7 mg kg-1, while plots under stress had a maximum soil P of 4.8 mg kg-1. Peach nano-black carbon yielded the highest soil P levels (21.6 mg kg-1), while peach compost resulted in the lowest. Among the beneficial microbes, PSB also showed an improvement in soil P (21.1 mg kg-1) compared with Trichoderma (18.3 mg kg-1). Regarding the P treatments, the application of 100 kg P ha-1 produced significantly higher soil P values (26.8 mg kg-1), followed by 75 kg P ha-1 (18.3 mg kg-1), while 50 kg P ha-1 produced the lowest soil P values (14.1 mg kg-1). SOC increased with peach nano-black carbon (13.7 g kg-1) compared to peach wastes and compost. In contrast to PSB (8.8 g kg-1), soil-treated Trichoderma was shown to have a greater SOC (11.1 g kg-1).

Keywords: acidic stress, Trichoderma, beneficial microbes, nano-black carbon, compost, peach residues, phosphorus, soybean

Procedia PDF Downloads 51
417 Effect of Non-Regulated pH on the Dynamics of Dark Fermentative Biohydrogen Production with Suspended and Immobilized Cell Culture

Authors: Joelle Penniston, E. B. Gueguim-Kana

Abstract:

Biohydrogen has been identified as a promising alternative to non-renewable fossil reserves, owing to its sustainability and non-polluting nature. pH is considered a key parameter in fermentative biohydrogen production processes, due to its effect on hydrogenase activity, metabolic activity and substrate hydrolysis. The present study assesses the influence of regulating pH on dark fermentative biohydrogen production. Four experimental hydrogen production schemes were evaluated. Two used suspended cells, under pH-regulated growth conditions (Sus_R) and under non-regulated pH (Sus_N). The other two used alginate-immobilized cells, under pH-regulated (Imm_R) and non-regulated (Imm_N) conditions. All experiments were carried out at 37.5°C with glucose as the sole carbon source. Sus_R showed a lag time of 5 hours, a peak hydrogen fraction of 36% and glucose degradation of 37%, compared with Sus_N, which showed a peak hydrogen fraction of 44% and complete glucose degradation. Both suspended-culture systems showed a higher peak biohydrogen fraction than the immobilized-cell systems. Imm_R showed a lag phase of 8 hours and a peak biohydrogen fraction of 35%, while Imm_N showed a lag phase of 5 hours and a peak biohydrogen fraction of 22%. Complete (100%) glucose degradation was observed in both the pH-regulated and non-regulated immobilized processes. This study showed that biohydrogen production in batch mode with suspended cells in a non-regulated pH environment results in partial degradation of the substrate, with lower yield; this scheme has been the culture mode of choice in most reported biohydrogen studies. The relatively lower slope of the pH trend in the non-regulated experiment with immobilized cells (Imm_N) compared with Sus_N revealed that immobilized systems have a better buffering capacity than suspended systems, which allows extended biohydrogen production even under non-regulated pH conditions. However, alginate-immobilized cultures in flask systems showed some drawbacks associated with the high rate of gas production, which increases the buoyancy of the immobilization beads and ultimately impedes the release of gas from the flask.

Keywords: biohydrogen, sustainability, suspended, immobilized

Procedia PDF Downloads 328
416 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling

Authors: Ghita Benayad

Abstract:

Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance by addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The first part of the study is a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. To overcome these challenges, the project proposes a PCA-driven methodology that isolates the important characteristics influencing asset returns by reducing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Building on this foundation, the project uses a multi-objective optimization model that considers several performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals such as regulatory compliance or sustainability standards. This model provides a more comprehensive treatment of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model was then applied to historical market data with the aim of constructing portfolios that perform better under different market conditions. Compared with portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, greater resilience to market downturns, and better alignment with the specified investment objectives. The study also examines the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project contributes to the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed approach to portfolio management opens up new avenues for research and application in the area of investment techniques.
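As an illustration of the PCA step described above, the following sketch reduces a matrix of asset returns to its leading principal components and uses the de-noised covariance in a simple single-objective allocation; the synthetic data, variance threshold and risk-aversion parameter are assumptions, and the paper's multi-objective model is not reproduced here.

```python
# Hedged sketch: PCA on asset returns followed by a simple risk-return trade-off.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(1000, 20))    # 1000 days x 20 assets (synthetic)

# 1. PCA via eigendecomposition of the sample covariance matrix.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 2. Keep the components explaining ~90% of the variance.
k = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.90) + 1
factors = returns @ eigvecs[:, :k]                      # principal-component (factor) returns

# 3. Rebuild a de-noised covariance from the retained components and use it in a
#    basic mean-variance style allocation (risk-aversion gamma is an assumption).
cov_denoised = eigvecs[:, :k] @ np.diag(eigvals[:k]) @ eigvecs[:, :k].T
mu, gamma = returns.mean(axis=0), 5.0
raw_w = np.linalg.solve(gamma * cov_denoised + 1e-6 * np.eye(20), mu)
weights = np.clip(raw_w, 0, None)
weights /= weights.sum()                                # long-only, fully invested weights
```

A multi-objective version would replace step 3 with a vector of objectives (return, risk, compliance or sustainability scores) handled by a Pareto-based or scalarized optimizer.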

Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market

Procedia PDF Downloads 30
415 Computational System for the Monitoring Ecosystem of the Endangered White Fish (Chirostoma estor estor) in the Patzcuaro Lake, Mexico

Authors: Cesar Augusto Hoil Rosas, José Luis Vázquez Burgos, José Juan Carbajal Hernandez

Abstract:

White fish (Chirostoma estor estor) is an endemic species that inhabits Lake Patzcuaro in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. Its population has declined dramatically owing to overfishing, contamination and eutrophication of the lake water, putting this important species at risk of extinction. This work proposes a new computational model for monitoring and assessing the critical environmental parameters of the white fish ecosystem. Following an Analytical Hierarchy Process, a mathematical model is built that assigns a weight to each environmental parameter according to its importance for water quality in the ecosystem. An advanced system for monitoring, analysing and controlling water quality is then developed in the LabVIEW virtual environment. As a result, a global score is obtained that indicates the condition of the water quality in the Chirostoma estor ecosystem (excellent, good, regular or poor), supporting effective decision-making about the environmental parameters that affect the proper culture of the white fish, such as temperature, pH and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth of this species, with water quality tending towards regular levels. The system emerges as a suitable tool for water management; future regulations for the white fish fishery should reduce the mortality rate in the early stages of development of the species, which represent the most critical phase. This can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and its inhabitants, since white fish is an important food and source of income for the region, yet the species is endangered.
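A minimal sketch of the AHP weighting step described above is given below, assuming an illustrative pairwise comparison matrix for three of the parameters mentioned (temperature, pH, dissolved oxygen); the judgements and the final scoring rule are hypothetical and not taken from the paper.

```python
# Hedged sketch: AHP priority weights from a Saaty-style pairwise comparison matrix.
import numpy as np

# Illustrative judgements: temperature vs pH vs dissolved oxygen.
A = np.array([[1.0, 3.0, 1/2],
              [1/3, 1.0, 1/4],
              [2.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()                       # priority weights summing to 1

# Consistency check: CR < 0.1 indicates acceptable judgements (RI = 0.58 for n = 3).
lam_max = eigvals.real[i]
CI = (lam_max - 3) / (3 - 1)
CR = CI / 0.58
print(dict(zip(["temperature", "pH", "dissolved_oxygen"], weights.round(3))), "CR =", round(CR, 3))

# A weighted water-quality score could then combine normalised sub-indices q_i,
# score = sum(w_i * q_i), mapped to classes such as excellent/good/regular/poor.
```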

Keywords: Chirostoma estor estor, computational system, LabVIEW, white fish

Procedia PDF Downloads 309
414 Exergetic Optimization on Solid Oxide Fuel Cell Systems

Authors: George N. Prodromidis, Frank A. Coutelieris

Abstract:

Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (hydrocarbon-rich source), its renewable status and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert the fuel's chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with a lower emission rate, especially when utilizing biogas. Electricity production from biogas is a composite problem that requires extensive parametric analysis of numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for optimizing the operation of SOFC-based power plants, based on fundamental thermodynamics and on energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies representing real-life operating conditions. THERMAS also allows a wide variety of values to be chosen for each operational parameter individually, thus allowing studies within unexplored and experimentally impossible operating ranges. Finally, THERMAS incorporates a specific criterion, drawn from the extensive energy analysis, to identify the optimal scenario for each simulated system in exergy terms. Several dynamic parameters as well as several biogas mixture compositions have therefore been taken into account to cover all possible scenarios. Using an innovative optimization factor (OPF), presented here, the study reveals that systems supplied with low-methane fuels can be comparable to those supplied with pure methane. In conclusion, such a simulation model offers a perspective on the optimal design of SOFC-stack-based systems, moving towards the commercialization of systems utilizing biogas.
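For context, an exergy balance of the kind described above typically rests on the standard definitions below (usual textbook notation; the paper's own OPF criterion is not reproduced here).

```latex
% Standard flow-exergy and exergetic-efficiency definitions (textbook form).
\begin{align}
  ex_{ph} &= (h - h_0) - T_0\,(s - s_0)
    && \text{specific physical (flow) exergy relative to the dead state } (T_0, p_0) \\
  \dot{E}x_{dest} &= \sum_{in}\dot{m}\,ex - \sum_{out}\dot{m}\,ex
      + \sum_{j}\dot{Q}_j\!\left(1 - \frac{T_0}{T_j}\right) - \dot{W}
    && \text{exergy destroyed in a steady-state control volume} \\
  \eta_{ex} &= \frac{\dot{W}_{el}}{\dot{m}_{fuel}\,ex_{fuel}^{ch}}
    && \text{exergetic efficiency of the power plant}
\end{align}
```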

Keywords: biogas, exergy, efficiency, optimization

Procedia PDF Downloads 353
413 Study on Effectiveness of Strategies to Re-Establish Landscape Connectivity of Expressways with Reference to Southern Expressway Sri Lanka

Authors: N. G. I. Aroshana, S. Edirisooriya

Abstract:

Construction of highways is the most prominent development trend in Sri Lanka. Along with these development activities, numerous environmental and social issues have arisen. Landscape fragmentation is one of the main environmental impacts of expressway construction. The Sri Lankan expressway system attempts to treat the fragmented landscape by using highway crossing structures. This paper presents a post-construction landscape study of the effectiveness of landscape connectivity structures in restoring connectivity. The least-cost-path tool of a Geographic Information System (GIS) was used in two selected plots, 25 km along the expressway, to identify animal crossing paths. Animal accident data were used as a measure for determining which plot contributes most to landscape connectivity. Number of patches, mean patch size and class area were used as parameters to determine the land use class most effective in re-establishing landscape connectivity. The findings show that scrub, grass and marsh were the land use typologies most positively affected and most conducive to increased landscape connectivity, with growth of 8% over a period of 12 years. The least-cost analysis within plot one shows that 28.5% of all animal crossing structures lie within high-resistance land use classes. The Southern Expressway used reinforced compressed earth technologies for construction, which has constrained the growth of the climax community. According to these findings, it can be assumed that the landscape crossing structures contribute to re-establishing connectivity, but they are not sufficient to restore the majority of the disturbance caused by the expressway. The connectivity measures used in this study can serve as a tool for re-evaluating the future placement of highway crossing structures; proper placement increases the rate of connectivity. The study recommends monitoring all stages of the project (pre-construction, construction and post-construction) and the preliminary design, and applying the connectivity assessment strategies used in this research, to overcome the complications of re-establishing landscape connectivity using highway crossing structures that facilitate the movement of flora and fauna.
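To illustrate the least-cost-path analysis mentioned above, the sketch below runs Dijkstra's algorithm over a small resistance raster; GIS packages provide this as a built-in cost-distance tool, and the raster values, grid size and start/end cells here are purely illustrative.

```python
# Hedged sketch: least-cost path across a resistance raster via Dijkstra's algorithm.
import heapq
import numpy as np

def least_cost_path(resistance, start, goal):
    """Return the minimum-cost path (list of cells) and its cost over a raster."""
    rows, cols = resistance.shape
    dist = {start: resistance[start]}
    prev, pq = {}, [(resistance[start], start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, np.inf):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected moves
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr, nc]                  # accumulate crossing cost
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1], dist[goal]

# Low values = permeable scrub/grass/marsh; high values = built-up or road surface.
raster = np.array([[1, 1, 5, 9, 1],
                   [1, 2, 5, 9, 1],
                   [1, 1, 1, 3, 1],
                   [9, 9, 2, 1, 1]])
path, cost = least_cost_path(raster, (0, 0), (3, 4))
```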

Keywords: landscape fragmentation, least cost path, land use analysis, landscape connectivity structures

Procedia PDF Downloads 137
412 Tuberculosis (TB) and Lung Cancer

Authors: Asghar Arif

Abstract:

Lung cancer is recognized as one of the most common cancers, causing about 1.2 million deaths per year worldwide. It is the most prevalent cancer in men and the third most common cancer among women (after breast and digestive cancers). Recent evidence points to the inflammatory process as one of the potential factors in cancer. Tuberculosis (TB), pneumonia and chronic bronchitis are among the most important inflammation-inducing conditions in the lungs, among which TB plays the most profound role in the emergence of cancer. TB is an important cause of mortality throughout the world, with 205,000 deaths reported annually due to this disease. Chronic inflammation and fibrosis due to TB can induce genetic mutations and alterations. The lung parenchyma is involved in both TB and lung cancer, and the continuous cough of lung cancer, morphological vascular variations, lymphocytosis and the generation of immune-system mediators such as interleukins are all factors that support the hypothesis that TB plays a role in lung cancer. Some reports have shown that the induction of necrosis and apoptosis, or TB reactivation, especially in patients with immune deficiency, may increase IL-17 and TNF-α, which will either decrease p53 activity or increase the expression of Bcl-2, decrease Bax-T, and inhibit caspase-3 expression through decreased expression of mitochondrial cytochrome oxidase. It has also been indicated that following injection of the BCG vaccine, the host immune system is reinforced; in particular, levels of interferon gamma, nitric oxide and interleukin-2 increase, so CD4+ lymphocyte function improves and the person becomes more immune against cancer. Numerous prospective studies have been conducted on the role of TB in lung cancer, and it appears that this disease contributes to that particular cancer. One of the main challenges of lung cancer is its correct and timely diagnosis. Unfortunately, the clinical symptoms (such as continuous cough, hemoptysis, weight loss, fever, chest pain, dyspnea and loss of appetite) and radiological images are similar in TB and lung cancer. Therefore, anti-TB drugs are routinely prescribed in countries with a high prevalence of TB, like Pakistan. Given this similarity in clinical symptoms and radiological findings, proper diagnosis is necessary to distinguish lung cancer from TB and from respiratory infections due to nontuberculous mycobacteria (NTM). Some apparently drug-resistant TB cases are, in fact, lung cancer or NTM lung infections. Acid-fast staining and histological study of sputum and bronchial washings, culture, and TB polymerase chain reaction are among the most important approaches for the differential diagnosis of these diseases. In brief, TB is assumed to be one of the risk factors for cancer. Numerous studies have been conducted worldwide, and a significant relationship has been observed between previous TB infection and lung cancer. However, to prove this hypothesis, further and more extensive studies are required. In addition, as the clinical symptoms and radiological findings of TB, lung cancer and non-TB mycobacterial lung infections are similar, lung cancer and NTM infections can be misdiagnosed as TB.

Keywords: TB and lung cancer, TB patients, TB survivors, TB and HIV/AIDS

Procedia PDF Downloads 62
411 Development and Characterization of Cathode Materials for Sodium-Metal Chloride Batteries

Authors: C. D’Urso, L. Frusteri, M. Samperi, G. Leonardi

Abstract:

Solid metal halides are used as the active cathode ingredients in Na-NiCl2 batteries, which require a molten secondary electrolyte, sodium tetrachloroaluminate (NaAlCl4), to facilitate the movement of Na+ ions into the cathode. The sodium-nickel chloride (Na-NiCl2) battery has been extensively investigated as a promising system for large-scale energy storage applications. The growth of Ni and NaCl particles in the cathode is one of the most important factors degrading the performance of the Na-NiCl2 battery: the larger the particles of the active ingredients, the smaller the active surface available for the electrochemical reaction. The growth of Ni and NaCl particles can therefore lead to increased cell polarization resulting from the reduced active area. A higher current density, a higher state of charge (SOC) at the end of charge (EOC) and a lower Ni/NaCl ratio are the main parameters that cause rapid growth of the Ni particles. In light of these problems, nano-structured cathode materials and chemistries with recognized and well-documented electrochemical functions have been studied and manufactured to simultaneously improve battery performance and develop less expensive, better-performing, sustainable and environmentally friendly materials. Starting from the well-known cathode material (Na-NiCl2), new cathode materials were prepared by partially replacing nickel with iron (10-90% substitution) to obtain a material with potential advantages over current battery technologies, for example (1) a lower cost of the cathode material compared with the state of the art and (2) cheaper material choices for cell components (stainless steels could be used for components including cathode current collectors and cell housings). The particle size and the physico-chemical characteristics of the cathode were studied in a test cell using, where possible, the galvanostatic intermittent titration technique (GITT). Furthermore, the impact of temperature on the different compositions of the positive electrode was studied; in particular, the optimum operating temperature is an important parameter of the active material.
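For reference, GITT analyses of this kind commonly evaluate the chemical diffusion coefficient with the Weppner-Huggins expression below (standard notation; the paper may use a variant).

```latex
% Commonly used GITT expression (Weppner & Huggins) for the chemical diffusion coefficient.
\begin{equation}
  D = \frac{4}{\pi \tau}
      \left( \frac{m_B V_M}{M_B S} \right)^{2}
      \left( \frac{\Delta E_s}{\Delta E_\tau} \right)^{2},
  \qquad \tau \ll \frac{L^{2}}{D}
\end{equation}
% tau: current-pulse duration; m_B, M_B, V_M: mass, molar mass and molar volume of the
% active material; S: electrode-electrolyte contact area; Delta E_s: steady-state voltage
% change per pulse; Delta E_tau: transient voltage change during the pulse;
% L: characteristic diffusion length.
```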

Keywords: critical raw materials, energy storage, sodium metal halide, battery

Procedia PDF Downloads 89
410 ATR-IR Study of the Mechanism of Aluminum Chloride Induced Alzheimer Disease - Curative and Protective Effect of Lepidium sativum Water Extract on Hippocampus Rats Brain Tissue

Authors: Maha J. Balgoon, Gehan A. Raouf, Safaa Y. Qusti, Soad S. Ali

Abstract:

The main cause of Alzheimer disease (AD) is believed to be the accumulation of free radicals owing to oxidative stress (OS) in brain tissue. The mechanism of neurotoxicity in aluminum chloride (AlCl3)-induced AD in hippocampal tissue of albino Wistar rat brain, together with the curative and protective effects of Lepidium sativum (LS) water extract, was assessed after 8 weeks by attenuated total reflection infrared spectroscopy (ATR-IR) and histologically by light microscopy. The ATR-IR results revealed that the membrane phospholipids underwent free-radical attack, mediated by AlCl3, primarily affecting the polyunsaturated fatty acids, as indicated by the increase in the olefinic -C=CH sub-band area around 3012 cm-1 obtained from curve-fitting analysis. The narrowing of the half band width (HBW) of the sνCH2 sub-band around 2852 cm-1 due to Al intoxication indicates the presence of trans-form fatty acids rather than the gauche rotamer. The degradation of the hydrocarbon chains to shorter chain lengths, the increase in membrane fluidity and disorder, and the decrease in lipid polarity in the AlCl3 group were indicated by changes in certain calculated area ratios compared with the control. Administration of LS greatly improved these parameters compared with the AlCl3 group. Al influences Aβ aggregation and plaque formation, which in turn interferes with and disrupts the membrane structure. The results also showed a marked increase in the parallel and antiparallel β-sheet structures that characterize Aβ formation in Al-induced AD hippocampal tissue, indicated by the increase in both amide I sub-bands around 1674 and 1692 cm-1. This drastic increase in Aβ formation was greatly reduced in the curative and protective groups compared with the AlCl3 group and approached the control values. These results were also supported by light microscopy: the AlCl3 group showed marked degenerative changes in hippocampal neurons, with most cells appearing small, shrunken and deformed. Interestingly, the administration of LS in the curative and protective groups markedly decreased the number of degenerated cells compared with the untreated group, and the intensity of Congo red-stained cells also decreased; hippocampal neurons looked more or less similar to those of the control. This study shows a promising therapeutic effect of Lepidium sativum (LS) on the AD rat model, substantially counteracting the signs of oxidative stress on membrane lipids and mitigating protein misfolding.
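As a rough illustration of how a sub-band area such as the olefinic band near 3012 cm-1 can be extracted by curve fitting, the sketch below fits Gaussian components to a synthetic spectral window; the spectrum, band positions and widths are illustrative assumptions, not the study's data.

```python
# Hedged sketch: estimating a sub-band area by fitting Gaussian components
# to a synthetic spectral window (not the study's measured ATR-IR data).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, centre, width):
    return amp * np.exp(-((x - centre) ** 2) / (2 * width ** 2))

def two_bands(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

wavenumber = np.linspace(2980, 3040, 300)
spectrum = (two_bands(wavenumber, 0.12, 3012, 6, 0.30, 2995, 8)
            + np.random.default_rng(0).normal(0, 0.005, wavenumber.size))

p0 = [0.1, 3012, 5, 0.3, 2995, 7]                 # initial guesses for the fit
params, _ = curve_fit(two_bands, wavenumber, spectrum, p0=p0)
a1, c1, w1 = params[:3]
olefinic_area = a1 * w1 * np.sqrt(2 * np.pi)      # analytic area of the band near 3012 cm-1
```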

Keywords: aluminum chloride, Alzheimer disease, ATR-IR, Lepidium sativum

Procedia PDF Downloads 350
409 A Gendered Perspective of the Influence of Public Transport Infrastructural Design on Accessibility

Authors: Ajeni Ari, Chiara Maria Leva, Lorraine D’Arcy, Mary Kinahan

Abstract:

In addressing gender and transport, consideration of mobility disparities amongst users is important. Public transport (PT) policy and design do not adequately account for the varied mobility practices of men and women, and the literature has only recently begun to move towards gender inclusion in transport. Transport policy and design largely remain gender-blind to this variation in mobility needs. The global movement towards sustainability highlights the need for expeditious strategies that could mitigate biases within the existing system. At the forefront of such a plan of action may be, in part, mandated inclusive infrastructural designs that stimulate user engagement with the transport system. Fundamentally, access requires a means or an opportunity, which for PT is established by its physical environment and/or infrastructural design; its practicality can be improved with knowledge of the shortcomings in the tangible and intangible aspects of the service offerings that allow access to opportunities. To identify existing biases in PT planning and design, this study analyses qualitative data to examine the opinions and lived experiences of transport users in Ireland. Findings show that infrastructural design plays a significant role in users' engagement with the service. Paramount to accessibility are service provisions that cater to both users' interactions and those of their dependents. Apprehension about using the service is more evident in women than in men, particularly while carrying out household duties and caring responsibilities at peak times or during dark hours. Furthermore, limitations are apparent where infrastructural service offerings do not accommodate the physical (dis)ability of users, especially with respect to universal design. There are intersecting factors that impinge on accessibility, e.g., safety and security, yet the infrastructural design remains an important parameter influencing users' perceptions. Additionally, the data disclose the need for user intricacies to be factored into transport planning geared towards gender inclusivity, including mobility practices, travel purpose, transit time and location, and system integration.

Keywords: infrastructure design, public transport, accessibility, women, gender

Procedia PDF Downloads 60