Implications of Human Cytomegalovirus as a Protective Factor in the Pathogenesis of Breast Cancer
Authors: Marissa Dallara, Amalia Ardeljan, Lexi Frankel, Nadia Obaed, Naureen Rashid, Omar Rashid
Abstract:
Human Cytomegalovirus (HCMV) is a ubiquitous virus that remains latent in approximately 60% of individuals in developed countries. Viral load is kept at a minimum by the robust immune response produced in most individuals, who remain asymptomatic. HCMV has recently attracted attention in cancer research because it may exert oncomodulatory effects on the tumor cells it infects, which could influence the progression of cancer. HCMV has been implicated in the increased pathogenicity of certain cancers such as gliomas, but it can also exhibit anti-tumor activity. HCMV seropositivity has been recorded in tumor cells, and it may be associated with decreased pathogenesis in some cancers, such as leukemia, and increased pathogenesis in others. This study aimed to investigate the correlation between cytomegalovirus infection and the incidence of breast cancer. Methods: Data were extracted from a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to compare patients infected with cytomegalovirus against uninfected patients, identified using ICD-10 and ICD-9 codes. Permission to use the database was granted by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Data analysis was conducted using standard statistical methods. Results: The query covered January 2010 to December 2019 and yielded 14,309 patients in each of the infected and control groups, matched by age range and CCI score. The incidence of breast cancer was 1.642% (235 patients) in the cytomegalovirus group compared to 4.752% (680 patients) in the control group. The difference was statistically significant, with a p-value of less than 2.2 x 10^-16 and an odds ratio of 0.43 (95% CI: 0.40-0.48).
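The odds ratio and confidence interval above follow from a standard 2x2 contingency-table calculation. As a hedged sketch, using the Woolf (logit) method — which the abstract does not specify, so the interval will not exactly match the database software's own output — the calculation from the reported counts looks like this:

```python
import math

# Counts taken from the abstract: 235 of 14,309 HCMV patients and
# 680 of 14,309 controls developed breast cancer.
a, b = 235, 14309 - 235    # breast cancer / no breast cancer, HCMV group
c, d = 680, 14309 - 680    # breast cancer / no breast cancer, control group

odds_ratio = (a * d) / (b * c)

# Woolf 95% confidence interval, computed on the log-odds scale.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An odds ratio below 1 with a confidence interval excluding 1 is what marks the association as protective; the exact value reported in the abstract additionally reflects the study's matching procedure.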
Investigation into the effects of HCMV treatment modalities, including valganciclovir, cidofovir, and foscarnet, on breast cancer in both groups was conducted, but the numbers were insufficient to yield statistically significant correlations. Conclusion: This study demonstrates a statistically significant correlation between cytomegalovirus infection and a reduced incidence of breast cancer. If HCMV can exert anti-tumor effects on breast cancer and inhibit its growth, it could potentially be harnessed to formulate immunotherapies targeting various types of breast cancer. Further evaluation is warranted to assess the implications of cytomegalovirus in reducing the incidence of breast cancer.
Keywords: human cytomegalovirus, breast cancer, immunotherapy, anti-tumor
Procedia PDF Downloads 208

Effect of Endurance Training on Serum Chemerin Levels and Lipid Profile of Plasma in Obese Women
Authors: A. Moghadasein, M. Ghasemi, S. Fazelifar
Abstract:
Aim: Chemerin is a novel adipokine that plays an important role in regulating lipid metabolism and adipogenesis. Chemerin depends on autocrine and paracrine signals for the differentiation and maturation of fat cells; it also regulates glucose uptake in fat cells and stimulates lipolysis. It has been reported that, in adipocytes, chemerin enhances insulin-stimulated glucose uptake and causes tyrosine phosphorylation of the insulin receptor substrate. According to the literature, chemerin may increase insulin sensitivity in adipose tissue and is strongly associated with body mass index, triglycerides, and blood pressure in individuals with normal glucose tolerance. Limited information is available regarding the effect of exercise training on serum chemerin concentrations. The purpose of this study was to investigate the effect of endurance training on serum chemerin levels and plasma lipids in overweight women. Methodology: This study was a quasi-experimental research project with a pre-post test design. After the required examination and verification of blood pressure by the physician, 22 obese subjects (age: 35.64±5.55 yr, weight: 75.62±9.30 kg, body mass index: 32.4±1.6 kg/m2) were randomly assigned to aerobic training (n = 12) and control (n = 12) groups. Participants completed a questionnaire confirming no sports participation during the past six months, no use of anti-hypertension drugs or hormone therapy, no cardiovascular problems, and no complete stoppage of the menstrual cycle. Aerobic training was performed 3 times weekly for 8 weeks. Resting plasma chemerin levels and metabolic parameters were measured before and after the intervention. The control group did not participate in any training program. Ethical considerations included a complete description of the objectives to the study participants and ensuring the confidentiality of their information.
The Kolmogorov-Smirnov and Levene tests were used to check the normal distribution of the data and the homogeneity of variances, respectively. Analysis of variance with repeated measures was used to examine within-group changes and between-group differences. Statistical operations were performed using SPSS 16, and the significance level of the tests was set at P < 0.05. Results: After 8 weeks of aerobic training, plasma chemerin levels were significantly decreased in the aerobic training group compared with the control group (p < 0.05). Concurrently, HDL-c levels were significantly decreased (p < 0.05), whereas cholesterol, TG, and LDL-c levels showed no significant changes (p > 0.05). No significant correlation between chemerin levels and weight loss was observed in the overweight subjects. Conclusion: The present study demonstrated that 8 weeks of aerobic training reduced serum chemerin concentrations in overweight women, whereas the training program affected the lipid profiles of obese subjects differently. Further research is warranted to unravel the molecular mechanisms underlying the range of responses and the role of serum chemerin.
Keywords: chemerin, aerobic training, lipid profile, obese women
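The within-group pre/post comparisons described above can be illustrated with a paired t-statistic. This is a minimal sketch on synthetic chemerin values — the numbers below are invented for illustration and are not the study's data:

```python
import math

# Synthetic pre/post serum chemerin values (ng/mL) for 12 hypothetical
# subjects; the study itself analyzed such changes with repeated-measures
# ANOVA in SPSS 16.
pre  = [210, 198, 225, 240, 205, 218, 230, 215, 222, 208, 219, 227]
post = [192, 185, 210, 221, 190, 200, 214, 198, 205, 193, 202, 211]

diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))   # paired t-statistic, df = n - 1
print(f"mean drop = {mean_d:.1f} ng/mL, t({n - 1}) = {t_stat:.2f}")
```

A t-statistic beyond the critical value for df = 11 at alpha = 0.05 (about 2.2) corresponds to the "p < 0.05" decreases reported above.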
Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System
Authors: Tomilola J. Ajayi
Abstract:
A class of nanovalve systems containing an α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN) is modelled theoretically and computationally. The nanovalve controls the opening and blocking of the MSN pores for an efficient targeted drug-release system. Modelling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) as a function of pH. Conformational analysis was carried out prior to formation of the inclusion complex to find the global minimum of both the neutral and the protonated stalk. The B3LYP/6-311G(d,p) level of theory was employed to obtain all theoretically possible conformers of the stalk. Six conformers were taken into consideration, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at the B3LYP/6-311G(d,p) level of theory. This conformer was used as the starting structure to create the inclusion complexes. Nine complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å steps while keeping the distance between a dummy atom and the OMe oxygen atom on the stalk restricted; the dummy atom and the carbon atoms of the α-CD structure were equally restricted for orientation A (see Scheme 1). The structures generated at each step were optimized with the B3LYP/6-311G(d,p) method to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to an unfavourable host-guest interaction in the nanogate and hence to dethreading. The high interaction energy required and the accompanying conformational change are theoretically established to drive the release of α-CD at a certain pH. The release was found to occur between pH 5 and 7, in agreement with reported experimental results.
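The scan-and-select procedure above (dihedral swept from 0° to 360° in 5° steps, global minimum retained as the starting structure) can be sketched generically. The torsional energy function below is a toy stand-in for the DFT single-point energies, chosen only so the script is self-contained — the real study evaluates each geometry at the B3LYP/6-311G(d,p) level:

```python
import math

# Toy torsional potential standing in for the quantum-chemical energies:
# a three-fold barrier plus a weak one-fold term. Purely illustrative.
def torsional_energy(theta_deg):
    t = math.radians(theta_deg)
    return 2.0 * (1 + math.cos(3 * t)) + 0.5 * (1 - math.cos(t - math.radians(85.0)))

angles = range(0, 361, 5)                       # 0° to 360° in 5° steps
energies = {th: torsional_energy(th) for th in angles}
best = min(energies, key=energies.get)          # global-minimum conformer
print(f"global-minimum dihedral on this toy surface: {best} deg")
```

In the actual workflow, each grid point would be a constrained DFT optimization rather than a closed-form evaluation, but the bookkeeping — evaluate the grid, keep the lowest-energy geometry — is the same.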
In this study, we applied the theoretical model to predict the experimentally observed pH-responsive nanovalves, which enable the blocking and opening of mesoporous silica nanoparticle pores for a targeted drug-release system. Our results show that two major factors are responsible for cargo release at acidic pH: the higher interaction energy needed for the complex/nanovalve to persist after protonation, and the conformational change upon protonation, both driven by the slight pH change between 7 and 5.
Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo
The Influence of Minority Stress on Depression among Thai Lesbian, Gay, Bisexual, and Transgender Adults
Authors: Priyoth Kittiteerasack, Alana Steffen, Alicia K. Matthews
Abstract:
Depression is a leading contributor to the worldwide burden of disability and disease. Notably, lesbian, gay, bisexual, and transgender (LGBT) populations are at higher risk for depression than their heterosexual and cisgender counterparts. To date, little is known about the rates and predictors of depression among Thai LGBT populations. As such, the purpose of this study was to: 1) measure the prevalence of depression among a diverse sample of Thai LGBT adults and 2) determine the influence of minority stress variables (discrimination, victimization, internalized homophobia, and identity concealment), general stress (stress and loneliness), and coping strategies (problem-focused, avoidance, and seeking social support) on depression outcomes. This study was guided by the Minority Stress Model (MSM), which posits that elevated rates of mental health problems among LGBT populations stem from increased exposure to social stigma due to membership in a stigmatized minority group. Social stigma, including discrimination and violence, represents a unique source of stress for LGBT individuals and has a direct impact on mental health. This study was conducted as part of a larger descriptive study of mental health among Thai LGBT adults. Standardized measures consistent with the MSM were selected and translated into Thai by a panel of LGBT experts using the forward and backward translation technique. The psychometric properties of the translated instruments were tested and found acceptable (Cronbach's alpha > .8 and Content Validity Index = 1). Study participants were recruited using convenience and snowball sampling. Self-administered survey data were collected online and in person at a leading Thai LGBT organization. Descriptive statistics and multivariate analyses using multiple linear regression models were conducted to analyze the study data.
The mean age of participants (n = 411) was 29.5 years (S.D. = 7.4). Participants were primarily male (90.5%), homosexual (79.3%), and cisgender (76.6%). The mean depression score of study participants was 9.46 (SD = 8.43). Forty-three percent of LGBT participants reported clinically significant levels of depression as measured by the Beck Depression Inventory. In multivariate models, the combined influence of demographic, stress, coping, and minority stress variables explained 47.2% of the variance in depression scores (F(16,367) = 20.48, p < .001). Minority stressors independently associated with depression included discrimination (β = .43, p < .01), victimization (β = 1.53, p < .05), and identity concealment (β = -.54, p < .05). In addition, stress (β = .81, p < .001), history of chronic disease (β = 1.20, p < .05), and coping strategies (problem-focused coping β = -1.88, p < .01; seeking social support β = -1.12, p < .05; avoidance coping β = 2.85, p < .001) predicted depression scores. The study outcomes emphasize that minority stressors contributed uniquely to depression levels among Thai LGBT participants over and above typical non-minority stressors. The findings have important implications for nursing practice and the development of intervention research.
Keywords: depression, LGBT, minority stress, sexual and gender minority, Thailand
Gender and Total Compensation, in an ‘Age’ of Disruption
Authors: Daniel J. Patricio Jiménez
Abstract:
The term 'total compensation' refers to salary, training, innovation, development, and, of course, motivation; total compensation is an open and flexible system which must facilitate work-life balance and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a special role in this. Spain, in its statutory and collective-bargaining development, has not given a sufficient response to new phenomena such as bonuses, stock options, or fringe benefits (constructed dogmatically and by court decisions), nor to the new digital reality, where cryptocurrency, new collaborative models, and new forms of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD 902/2020 on 14 April 2021, certain measures are required under the principle of pay transparency; the valuation of jobs, the pay register (RD 6/2019), and the pay audit are examples of this. Analyzing the methodologies, and in particular the determination and weighting of the factors, so that the system itself is not discriminatory, is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect reality, and since the beginning of the pandemic, there has been a clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs; it is the wage which, based on internal equity, reflects the competitiveness of the company in terms of human capital. Spain has lost, and has not recovered, the relative weight of its wages; this has a direct impact on competitiveness and, consequently, on the precariousness of employment and undoubtedly on levels of extreme poverty.
Training is becoming more than ever a strategic factor; the new digital reality requires that each component of the system be connected. Transversality is imposed on us, forcing us to redefine content and respond to the demands of the new normality, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. So-called emotional compensation becomes particularly relevant at a time when the pandemic, silence, and disruption are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first recognizing that mental health is a priority and that it must be treated and communicated in an inclusive way, because doing so increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the 'new normality'; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.
Keywords: diversity, gender gap, human resources, sustainability
Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on analysis of the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modelling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating range of each residential appliance based on its power demand and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been used for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the house's occupants to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modelling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
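Dynamic Time Warping, the unsupervised matcher named above, compares two power-demand signatures while tolerating time shifts between their state transitions. A self-contained sketch of the classic dynamic-programming recurrence, on toy signatures rather than LPG or REDD data:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: step in a, step in b, or both.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Two toy power signatures: the same on/off cycle, slightly time-shifted.
sig1 = np.array([0, 0, 5, 5, 5, 0, 0], dtype=float)
sig2 = np.array([0, 5, 5, 5, 0, 0, 0], dtype=float)
print(dtw_distance(sig1, sig2))   # 0.0: the warp fully absorbs the shift
```

A plain point-wise (Euclidean) comparison would penalize the one-sample shift heavily, which is exactly why DTW suits event signatures recorded at a coarse 1/60 Hz rate.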
Investigation of Mangrove Area Effects on Hydrodynamic Conditions of a Tidal Dominant Strait Near the Strait of Hormuz
Authors: Maryam Hajibaba, Mohsen Soltanpour, Mehrnoosh Abbasian, S. Abbas Haghshenas
Abstract:
This paper evaluates the central role of mangrove forests in the unique hydrodynamic characteristics of the Khuran Strait (KS) in the Persian Gulf. Investigation of the hydrodynamic conditions of the KS is vital for predicting and estimating sedimentation and erosion throughout the protected areas north of Qeshm Island. The KS (or Tang-e-Khuran) lies between Qeshm Island and the Iranian mainland and has a minimum width of approximately two kilometers. The hydrodynamics of the strait are dominated by strong tidal currents of up to 2 m/s. The bathymetry of the area is dynamic and complicated because 1) strong currents in the area lead to apparent sand dune movement in the middle and southern parts of the strait, and 2) a vast area with mangrove coverage exists next to the narrowest part of the strait. This is why ordinary modeling schemes with normal mesh resolutions are not capable of estimating current fields in the KS with high accuracy. A comprehensive measurement campaign was carried out to investigate the hydrodynamics and morphodynamics of the study area, including 1) vertical current profiling at six stations, 2) directional wave measurements at four stations, 3) water level measurements at six stations, 4) wind measurements at one station, and 5) sediment grab sampling at 100 locations. A set of periodic hydrographic surveys was also included in the program. The numerical simulation was carried out with the Delft3D-FLOW module. The model was calibrated by comparing water levels and depth-averaged current velocities against the available observational data. The results clearly indicate that observations and simulations only agree if a realistic representation of the mangrove area is captured in the model bathymetry. After generating an unstructured grid using RGFGRID and QUICKIN, the flow model was driven with water level time series at the open boundaries.
With the available field data, the key role of the mangrove area in the hydrodynamics of the study area can be examined. The results show that including the accurate geometry of the mangrove area and accounting for its sponge-like behavior are the key aspects through which a realistic current field can be simulated in the KS.
Keywords: Khuran Strait, Persian Gulf, tide, current, Delft3D
Tackling the Decontamination Challenge: Nanorecycling of Plastic Waste
Authors: Jocelyn Doucet, Jean-Philippe Laviolette, Ali Eslami
Abstract:
The end-of-life management and recycling of polymer wastes remains a key environmental issue in ongoing efforts to increase resource efficiency and attain GHG emission reduction targets. Half of all the plastic ever produced was made in the last 13 years, yet only about 16% of plastic waste is collected for recycling, while 25% is incinerated, 40% is landfilled, and 19% is unmanaged and leaks into the environment and waterways. In addition to the collection problem, the UN recently published a report on chemicals in plastics, which adds another layer of difficulty when integrating recycled content containing toxic products into new products. Tackling these issues requires innovative solutions. Chemical recycling of plastics complements the current recycled plastic market by converting waste material into high-value chemical commodities that can be reintegrated into a variety of applications, making the total market size of the output (virgin-like, high-value products) larger than the market size of the input (plastic waste). Access to high-quality feedstock also remains a major obstacle, primarily due to material contamination. Pyrowave approaches this challenge with its nano-recycling technology, which purifies polymers at the molecular level, removing undesirable contaminants and restoring the resin to its virgin state without depolymerising it. This approach expands the range of plastics that can be effectively recycled, including mixed plastics with contaminants such as lead, inorganic pigments, and flame retardants. The technology achieves residual contaminant levels below 100 ppm, and the purity can be adjusted to the customer's specifications.
The separation of polymer and contaminants in Pyrowave's nano-recycling process offers the unique ability to customize the solution for the targeted additives and contaminants to be removed, based on differences in molecular size. This precise control enables a final polymer purity equivalent to virgin resin. The patented process involves dissolving the contaminated material in a specially formulated solvent, purifying the mixture at the molecular level, and subsequently extracting the solvent to yield a purified polymer resin that can be reintegrated directly into new products without further treatment. Notably, this technology offers simplicity, effectiveness, and flexibility while minimizing environmental impact and keeping valuable resources in the manufacturing circuit. Pyrowave has successfully applied this nano-recycling technology to decontaminate polymers and supply purified, high-quality recycled plastics to demanding industries, including food-contact-compliant applications. The technology is low-carbon, electrified, and provides 100% traceable resins with properties identical to those of virgin resins. Additionally, low recycling rates and the limited market for traditionally hard-to-recycle plastic waste have fueled the need for new complementary alternatives. Chemical recycling, such as Pyrowave's microwave depolymerization, presents a sustainable and efficient solution by converting plastic waste into high-value commodities. By employing microwave catalytic depolymerization, Pyrowave enables a truly circular economy of plastics, particularly in treating polystyrene waste to produce virgin-like styrene monomers, with low energy consumption, high yields, and a reduced carbon footprint. Pyrowave offers a portfolio of sustainable, low-carbon, electric solutions to give plastic waste a second life and paves the way to the new circular economy of plastics.
Here, for polystyrene in particular, we show that styrene monomer yields from Pyrowave's microwave depolymerization reactor are 1.5 to 2.2 times higher than those of conventional thermal pyrolysis. In addition, we provide a detailed understanding of microwave-assisted depolymerization by analyzing the effects of microwave power, pyrolysis time, microwave receptor, and temperature on styrene product yields. Furthermore, we present a life-cycle environmental impact assessment of microwave-assisted pyrolysis of polystyrene at commercial scale. Finally, it is worth pointing out that Pyrowave is able to treat several tons of polystyrene to produce virgin styrene monomers and to manage waste/contaminated polymeric materials in a truly circular economy.
Keywords: nanorecycling, nanomaterials, plastic recycling, depolymerization
Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India
Authors: Shubhangi Roy
Abstract:
Relational contracts (informal understandings not enforceable by law) are a common feature of most economies, but their dominance is higher in developing countries, and such informality of economic sectors is often correlated with lower economic growth. The aim of this paper is to investigate whether informal arrangements, i.e., relational contracts, are a cause or a symptom of lower levels of economic and/or institutional development. The methodology involves an initial survey of 150 subjects in Northern India. The subjects are all members of occupations in which they transact frequently, ensuring uniformity in transaction volume, but they come from varied socio-economic backgrounds, ensuring sufficient variance in transaction values and allowing us to examine the relationship between the amount of money involved and the method of transaction used, if any. The questions are quantitative and qualitative, with the aim of observing both behavior and the motivation behind it. An overarching similarity observed across all subjects' responses is that in an economy like India, with pervasive corruption and delayed litigation, economic participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class, and gender classifications, these sanctions can, in fact, be more burdensome for a potential rule-breaker than the legal ramifications. Informality is therefore a symptom of weak formal regulatory enforcement and dispute settlement mechanisms. Additionally, the study bifurcates such informal arrangements into two separate systems: a) those that exist in addition to, and augment, a legal framework, creating an efficient socio-legal equilibrium, and b) those in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements.
Instead of treating the entire gamut of such arrangements as counter-developmental, it helps decision-makers understand when to dismantle informal systems (the latter category) and when to pivot around them (the former). The paper hypothesizes that social arrangements supporting formal legal frameworks allow cheaper enforcement of regulations, lowering the enforcement cost burden on the state. On the other hand, norms which contradict legal rules undermine the formal framework: in the presence of such norms, infringement of the law has no impact on the reputation of the business or individual beyond the punishment imposed under the law. This is especially exacerbated in the Indian legal system, where enforcement of penalties for non-performance of contracts is weak. In such a situation, individuals adhere more strictly to the social norm than to the legal norm, which greatly undermines the role of regulation. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we expand the understanding of tools of market development beyond regulation, allowing academics and policymakers to harness social norms for less disruptive and more lasting growth.
Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms
Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength
Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph
Abstract:
Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft, and binder yarns or tows woven together into a preform, and the mechanical performance of the composite is largely controlled by the properties of this preform. As a result, the bulk of recent textile research has focused on the design of high-strength preform architectures, while studies on optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters responsible for this yarn damage are not fully understood, although recent studies indicate that poor yarn tension control may be an influencing factor: as tension increases, the yarn-to-yarn and yarn-to-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and the severity of weaving damage has never been adequately researched or quantified; a novel study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study quantified the variation of yarn tension throughout weaving and sought to link tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested, and the tensile strengths of pristine and tensioned yarns were compared to determine the induced weaving damage.
Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0N) compared to yarns positioned at the front of the creel (1.5-3.5N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the greater tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a decreased tension (3.0-4.5N) compared to the lower shed yarns (4.0-5.5N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the greater tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome from this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage. Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage
Procedia PDF Downloads 236
362 Investigation of a New Approach "AGM" to Solve Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Science
Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji
Abstract:
This work addresses the accuracy, capability, and power of a new approach for solving complicated nonlinear partial differential equations. Our purpose is to enhance the ability to solve nonlinear differential equations arising in basic science and engineering, and similar problems, with a simple and innovative approach. Most engineering systems behave nonlinearly in practice, and solving such problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; some problems, such as fluid and gas waves, cannot be solved numerically because no boundary conditions are available. Accordingly, we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and simple solutions; the achieved solutions are then compared with those of a numerical method (fourth-order Runge-Kutta). We argue that AGM can be of great value to researchers, professors, and students worldwide because of its coding system: with this software, complicated linear and nonlinear partial differential equations can be solved analytically. The advantages and abilities of the method are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) Most of the time, equations can be solved without any dimensionless procedure, for any number of boundary or initial conditions. (c) AGM is always convergent with respect to the boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms in the nonlinear differential equation require no Taylor expansion with AGM, which yields high solution precision.
(e) AGM is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with high, acceptable accuracy. (f) One of the important advantages of this method is analytical solving with high accuracy, such as for partial differential equations describing vibration in solids and waves in water and gas, requiring only minimal initial and boundary conditions. (g) It is very important to present a general and simple approach for solving most differential equations with high nonlinearity in the engineering sciences, especially in civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and exact solutions. Keywords: new approach, AGM, sets of coupled nonlinear differential equations, exact solutions, numerical
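The abstract does not give AGM's algebra, but the family of methods it belongs to assumes a trial solution with unknown coefficients and fixes those coefficients by enforcing the differential equation (and its derivatives) at a boundary or initial point. The toy sketch below illustrates that flavour of constraint on y' = -y; the test equation, the quadratic ansatz, and the coefficient names are illustrative assumptions, not the authors' implementation.

```python
import math

# Illustrative sketch: solve y' = -y, y(0) = 1 with a quadratic trial
# solution y(x) = a0 + a1*x + a2*x^2.  The unknown coefficients are fixed
# by the initial condition and by enforcing the ODE and its first
# derivative at x = 0.
a0 = 1.0         # from y(0) = 1
a1 = -a0         # from y'(0) = -y(0)
a2 = -a1 / 2.0   # from y''(0) = -y'(0), and y''(0) = 2*a2

def y_approx(x):
    return a0 + a1 * x + a2 * x ** 2

# Near x = 0 the ansatz tracks the exact solution e^(-x) closely.
print(abs(y_approx(0.1) - math.exp(-0.1)))  # error ~1.6e-4
```

A genuine comparison, as the abstract proposes, would pit such closed-form approximants against a fourth-order Runge-Kutta integration of the same system.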
Procedia PDF Downloads 463
361 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses
Authors: Ayon Mukherjee
Abstract:
Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a particular parametric form. On the other hand, precise estimates of the covariate-adjusted treatment effects are only obtained under a parametric assumption. To balance these two requirements, designs are developed that are free of distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs are highlighted and their preferred areas of application discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods. Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability
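The abstract references the doubly-adaptive biased coin design without formulas; the widely used Hu-Zhang allocation function conveys the idea in a few lines. The numeric target proportion and the tuning parameter gamma below are placeholders: in the actual design the target is a function of sequentially estimated Cox regression coefficients, which this sketch does not attempt.

```python
def dbcd_prob(x, y, gamma=2.0):
    """Hu-Zhang doubly-adaptive biased coin allocation function:
    probability of assigning the next patient to treatment A, given the
    current allocation proportion x and the estimated target proportion y
    (both strictly between 0 and 1).  gamma >= 0 tunes how aggressively
    the design corrects deviations from the target."""
    num = y * (y / x) ** gamma
    den = num + (1 - y) * ((1 - y) / (1 - x)) ** gamma
    return num / den

# When allocation sits exactly at the target, the rule assigns with
# probability equal to the target; when treatment A is under-allocated
# (x < y), the probability is pushed above the target, and vice versa.
print(dbcd_prob(0.5, 0.5))   # 0.5
print(dbcd_prob(0.4, 0.5))   # > 0.5: corrects under-allocation
```

ERADE replaces this smooth correction with a discrete rule (a fixed multiplier above or below the target), which is the discreteness the abstract cites as the reason for its slower convergence.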
Procedia PDF Downloads 165
360 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa
Authors: Yohana Fessehazion
Abstract:
Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations exceeding acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. This environmental issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the approximately 3,000 tons of mercury waste stored in the factory storage facility for over two decades. Recently, the theft of containers of the toxic substance from the Thor Chemicals warehouse and a subsequent fire that ravaged the facility have further put the factory in the spotlight, escalating the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, Umgeni River, and Inanda Dam, as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions are favourable for microbial activity to methylate mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are blooming. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sampling site from the point of source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam.
One-way analysis of variance (ANOVA) tests will be conducted to determine whether there are significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference (LSD) post hoc test to determine whether mercury contamination varies with distance from the source point of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and aquatic macrophytes such as water hyacinth are expected to accumulate higher concentrations of mercury than terrestrial plants and crops. Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth
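The planned one-way ANOVA reduces to a short computation; the sketch below builds the F statistic from scratch. The mercury values are invented placeholders, not measurements from the study.

```python
def one_way_anova(groups):
    """One-way ANOVA: F statistic and degrees of freedom for a list of
    sample groups (between-group vs. within-group variance ratio)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical Hg concentrations (ug/g) at three distances from the plant.
site_near, site_mid, site_far = [1.0, 1.1, 0.9], [2.0, 2.1, 1.9], [3.0, 3.1, 2.9]
F, df_b, df_w = one_way_anova([site_near, site_mid, site_far])
print(round(F, 1), df_b, df_w)  # 300.0 2 6
```

An F this far above the critical value of F(2, 6) would justify the LSD post hoc comparisons; in practice the study would obtain the p-value from a statistics package rather than from a hand-rolled routine like this.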
Procedia PDF Downloads 222
359 Intraspecific Biochemical Diversity of Dalmatian Pyrethrum Across the Different Bioclimatic Regions of Its Natural Distribution Area
Authors: Martina Grdiša, Filip Varga, Nina Jeran, Ante Turudić, Zlatko Šatović
Abstract:
Dalmatian pyrethrum (Tanacetum cinerariifolium (Trevir.) Sch. Bip.) is a plant species that occurs naturally in the eastern Mediterranean. It is of immense economic importance as it synthesizes and accumulates the phytochemical compound pyrethrin. Pyrethrin consists of several monoterpene esters (pyrethrin I and II, cinerin I and II and jasmolin I and II), which have insecticidal and repellent activity through their synergistic action. In this study, 15 natural Dalmatian pyrethrum populations were sampled along their natural range in Croatia, Bosnia and Herzegovina and Montenegro to characterize and compare their pyrethrin profiles and to define the bioclimatic factors associated with the accumulation of each pyrethrin compound. Pyrethrins were extracted from the dried flower heads of Dalmatian pyrethrum using ultrasound-assisted extraction, and the amount of each compound was quantified using high-performance liquid chromatography coupled with DAD-UV/VIS detection. The biochemical data were subjected to analysis of variance, correlation analysis and multivariate analysis. Quantitative variability within and among populations was found, with population P15 Vranjske Njive, Podgorica having the significantly highest pyrethrin I content (66.47% of total pyrethrin content), while the highest levels of total pyrethrin were found in P14 Budva (1.27% of dry flower weight; DW), followed by P08 Korčula (1.15% DW). Based on the environmental conditions at the sampling sites of the populations, five bioclimatic groups were distinguished, referred to as A, B, C, D, and E, each with a distinct chemical profile. The first group (A) consisted of the northern Adriatic population P01 Vrbnik, Krk and the population P06 Sevid, a coastal population of the central Adriatic, and differed significantly from the other bioclimatic groups in its higher average jasmolin II values (2.13% of total pyrethrin).
The second group (B) consisted of two central Adriatic island populations (P02 Telašćica, Dugi otok and P03 Žman, Dugi otok), while the remaining central Adriatic island populations were grouped in bioclimatic group C, which was characterized by the significantly highest average pyrethrin II (48.52% of total pyrethrin) and cinerin II (5.31% DW) content. The South Adriatic inland populations P10 Srđ and P11 Trebinje (Bosnia and Herzegovina), and the populations from Montenegro (P12 Grahovo, P13 Lovćen, P14 Budva and P15 Vranjske Njive, Podgorica) formed bioclimatic group E. This bioclimatic group was characterized by the highest average values for pyrethrin I (53.07% of total pyrethrin), total pyrethrin content (1.06% DW) and the ratio of pyrethrin I to II (1.85). Slightly lower (although not significantly so) values for the latter traits were detected in bioclimatic group D (southern Adriatic island populations P07 Vis, P08 Korčula and P09 Mljet). A weak but significant correlation was found between the levels of some pyrethrin compounds and bioclimatic variables (e.g., BIO03 Isothermality and BIO04 Temperature Seasonality), which explains part of the variability observed in the populations studied. This suggests an interconnection between bioclimatic variables and biochemical profiles, either through the selection of adapted genotypes or through the species' ability to alter the expression of biochemical traits in response to environmental change. Keywords: biopesticides, biochemical variability, pyrethrin, Tanacetum cinerariifolium
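The correlation analysis linking pyrethrin levels to bioclimatic variables boils down to a Pearson coefficient per compound-variable pair. A minimal sketch, with invented placeholder values rather than the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Placeholder values: isothermality (BIO03) vs. total pyrethrin (% DW)
bio03     = [30.0, 32.5, 35.0, 37.5, 40.0]
pyrethrin = [0.80, 0.95, 0.90, 1.10, 1.15]
print(round(pearson_r(bio03, pyrethrin), 2))  # 0.93
```

The study reports only weak (though significant) correlations of this kind, so its observed r values would sit much closer to zero than this toy example's.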
Procedia PDF Downloads 155
358 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) to maximize productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming, and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity-versus-yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages. Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
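The two-stage trade-off (grow biomass first, then divert flux to product) can be illustrated with a toy batch model. All kinetic parameters and the grid search below are assumptions for illustration; the paper's actual method is dynamic programming over flux balance models, which is far richer than this.

```python
import math

# Toy two-stage batch: exponential biomass growth until switch time ts,
# then constant biomass-proportional production until harvest at time T.
X0, MU, QP, T = 0.1, 0.5, 1.0, 10.0   # assumed parameters

def productivity(ts):
    biomass = X0 * math.exp(MU * ts)   # biomass at the metabolic switch
    product = QP * biomass * (T - ts)  # product made in the second stage
    return product / T                 # volumetric productivity

# Grid search over the switch time: neither "all growth" nor
# "all production" is optimal -- the best switch is interior.
best_ts = max((i / 10 for i in range(int(T * 10) + 1)), key=productivity)
print(best_ts)  # 8.0 here; analytically ts* = T - 1/MU for this model
```

Even this caricature reproduces the qualitative claim: a two-stage strategy with a well-chosen switch time dominates any single-stage strategy.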
Procedia PDF Downloads 215
357 The Temporal Pattern of Bumble Bees in Plant Visiting
Authors: Zahra Shakoori, Farid Salmanpour
Abstract:
Pollination services are a vital ecosystem service for maintaining environmental stability. The decline of pollinators can disrupt the ecological balance by affecting components of biodiversity. Bumble bees are crucial pollinators, playing a vital role in maintaining plant diversity. This study investigated the temporal patterns of their visits to flowers in Kiasar National Park, Iran. Observations were conducted in June 2024, totaling 442 person-minutes. Five species of bumble bees were identified. The study revealed that they consistently visited an average of 12-15 flowers per minute, regardless of species. The findings highlight the importance of protecting natural habitats, where bumble bee populations are thriving in the absence of human-induced stressors. The study was conducted in Kiasar National Park, located in the southeast of Mazandaran, northern Iran. The surveyed area, at an altitude of 1800-2200 meters, includes both forest and pasture. Bumble bee surveys were carried out on sunny days in June 2024, starting at dawn and ending at sunset. To avoid double-counting, we systematically searched for foraging habitats on low-sloping ridges with high mud density, frequently moving between patches. We recorded bumble bee visits to flowers and plant species per minute using direct observation, a stopwatch, and a pre-prepared form. We used one-way analysis of variance (ANOVA) with a 95% confidence level to examine potential differences in foraging rates across bumble bee species, flowers, plant bases, and plant species visited. Bumble bee identification relied on morphological indicators. A total of 442 person-minutes of bumble bee observations were recorded. Five species of bumble bees (Bombus fragrans, Bombus haematurus, Bombus lucorum, Bombus melanurus, Bombus terrestris) were identified during the study. The results showed that bumble bee visit rates to floral resources did not differ among species.
In general, bumble bees visited an average of 12-15 flowers every 60 seconds. In the same time, they visited between 3-5 plant bases, and an average of 1 to 3 plant species per minute. While many taxa contribute to pollination, insects, especially bees, are crucial for maintaining plant diversity and ecosystem functions. As plant diversity increases, the stopping rate of pollinating insects rises, which reduces their foraging activity. Bumble bees therefore stop more frequently in natural areas than in agricultural fields, owing to higher plant diversity. Our findings emphasize the need to protect natural habitats like Kiasar National Park, where bumble bees thrive without human-induced stressors such as pesticides, livestock grazing, and pollution. With bumble bee populations declining globally, further research is essential to understand their behavior in different environments and to develop effective conservation strategies to protect them. Keywords: bumble bees, pollination, pollinator, plant diversity, Iran
Procedia PDF Downloads 28
356 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria
Authors: Tomola Obamuyi
Abstract:
The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. The innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customer satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique forecasts the relative variance contribution of each random innovation to the variables in the VAR, examines the effect of a standard deviation shock to one of the innovations on current and future values via the impulse response, and determines the causal relationship between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector.
On the other hand, the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was done over the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response functions derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT, and POS transactions was positive and significant in the periods of analysis. The entropy and range-of-values results also confirmed that, in the long run, both CHEQUE and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage/patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country. Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression
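The entropy step of the MCDM analysis admits a compact sketch: criteria whose values vary more across alternatives carry more information and receive higher weights. The decision matrix below is a made-up placeholder, not the study's bank data.

```python
import math

def entropy_weights(matrix):
    """Entropy weighting for an alternatives-by-criteria decision matrix
    of positive values: higher-dispersion criteria get higher weights."""
    m = len(matrix)        # number of alternatives (rows)
    n = len(matrix[0])     # number of criteria (columns)
    k = 1.0 / math.log(m)
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        entropy = -k * sum(p * math.log(p) for p in probs if p > 0)
        diversification.append(1.0 - entropy)  # degree of divergence
    s = sum(diversification)
    return [d / s for d in diversification]

# Criterion 1 varies strongly across alternatives; criterion 2 barely.
w = entropy_weights([[1.0, 2.0], [2.0, 2.0], [3.0, 2.1]])
print(w[0] > w[1])  # True: the dispersed criterion dominates the weights
```

In the study this logic is what decides whether ROA or ROE better reflects the innovations' contributions; the ROV ranking step then uses the resulting weights.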
Procedia PDF Downloads 120
355 Development of Special Education in Moldova: Paradoxes of Inclusion
Authors: Liya Kalinnikova Magnusson
Abstract:
The present and ongoing research investigation focuses on the origins of special education in Moldova for children with disabilities and its development towards inclusion. The research is coordinated with related research on inclusion in Ukraine and other countries. The research interest in these issues in Moldova is motivated by several considerations. The first is the intensive deconstruction of special education institutions in Moldova since 1989. A large number of children with disabilities dropped out of these institutions: from 11,400 students in 1989 to 5,800 students in 1996, corresponding to 1% of all school-age Moldovan learners. Although a huge number of students were integrated into regular schools and the dynamics of this change were uneven across the country (by contrast, exclusion rose in Trans-Dniester on the border of Moldova), the scale of the change was evident, and traditional special educational provision was in stable decline. The second reason is tied to the transitional challenges Moldova faced under the pressure of economic liberalisation, which led the country into poverty. Deinstitutionalization of the entire state system took place amid economic polarization of society. Social benefits were dramatically diminished, increasing inequality. In terms of household income, the most vulnerable were families with many children, children with disabilities, children with health problems, etc.: every third child belonged to the poorest segment of the population. In 2000-2001, 87.4% of all families with children had incomes below the minimum wage.
The research question raised by these considerations addresses the particular patterns of the origins of special education and its development towards inclusion in Moldova from 1980 to the present: what is the pattern of special education's origins, and what are the particular arrangements of its development towards inclusion against inequality? This is a qualitative study drawing on relevant peer-reviewed resources connected to the research question and on national documents of educational reforms towards inclusion, both retrospective and contemporary, analysed using a content analysis approach. The study utilises long-term statistics compiled by the respective international agencies through their regular monitoring of the implementation of educational reforms. The main findings comprise three major themes: adoption of the Soviet pattern of special education, the 'endemic stress' of breaking the pattern, and the 'paradoxes of resolution'. Keywords: special education, statistics, educational reforms, inclusion, children with disabilities, content analysis
Procedia PDF Downloads 168
354 DC Bus Voltage Ripple Control of Photo Voltaic Inverter in Low Voltage Ride-Through Operation
Authors: Afshin Kadri
Abstract:
Using Renewable Energy Resources (RES) as a type of distributed generation (DG) unit is a developing practice in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some operational aspects of these grids. Most RES require power-electronic-based interfaces for connection to AC systems. These interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. There are two curves for these conditions: one gives the magnitude of the reactive component of current as a function of the voltage drop, and the other gives the minimum time the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter: among others, the amplitude of high-frequency components of the injected current increases, and inverters connected directly to photovoltaic panels are driven away from the maximum power point. The important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. The losses in the DC bus capacitors, which are electrolytic capacitors, raise their temperature and shorten their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and has the duty of maximum power point tracking, these ripples cause oscillations around the operating point and decrease the generated energy. The traditional method of eliminating these ripples is to use a bidirectional converter on the DC bus, which works as a buck-boost converter and transfers the ripples to its own DC bus. Although this eliminates the ripples on the main DC bus, it cannot solve the reliability problem, because it still uses an electrolytic capacitor on its own DC bus.
In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. Moreover, the proposed method works based on constant power control; in this way, in addition to supporting the amplitude of the grid voltage, it stabilizes its frequency by injecting active power. The proposed method can also eliminate the DC bus ripples during deep voltage drops, which would otherwise push the amplitude of the reference current above the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that the ripples in the DC bus are eliminated, although the generated power decreases. Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter
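The DC bus ripple the abstract targets has a standard sequence-domain description. The sketch below is general textbook material, not the paper's specific controller; the symbols v± and i± denote the positive- and negative-sequence voltage and current vectors of an unbalanced three-phase system.

```latex
% Instantaneous three-phase power split into sequence components:
p(t) = \mathbf{v}\cdot\mathbf{i}
     = \underbrace{\mathbf{v}^{+}\!\cdot\mathbf{i}^{+}
                 + \mathbf{v}^{-}\!\cdot\mathbf{i}^{-}}_{\bar{P}\ \text{(constant)}}
     + \underbrace{\mathbf{v}^{+}\!\cdot\mathbf{i}^{-}
                 + \mathbf{v}^{-}\!\cdot\mathbf{i}^{+}}_{\tilde{p}\ \text{(oscillates at } 2\omega\text{)}}
```

The oscillating term is what appears as double-frequency ripple on the DC bus. Choosing the injected negative-sequence current so that the oscillating term vanishes removes the ripple at the cost of unbalanced phase currents, which matches the trade-off described above: nominal current on the faulty phases, adjusted phases elsewhere, and reduced generated power.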
Procedia PDF Downloads 76
353 Sustainable Treatment of Vegetable Oil Industry Wastewaters by Xanthomonas campestris
Authors: Bojana Ž. Bajić, Siniša N. Dodić, Vladimir S. Puškaš, Jelena M. Dodić
Abstract:
Increasing industrialization, as a response to the demands of the consumer society, greatly exploits resources and generates large amounts of waste effluents in addition to the desired products. It is therefore a priority to implement technologies with maximum utilization of raw materials and energy, minimum generation of waste effluents, and/or recycling (secondary use) of those effluents. Considering the process conditions and the nature of the raw materials used by the vegetable oil industry, its wastewaters can be used as substrates for biotechnological production, which requires large amounts of water. In this way, the waste effluents of one branch of industry become raw materials for another branch, which produces a new product while reducing wastewater pollution and thereby reducing negative environmental impacts. Vegetable oil production generates wastewaters during the rinsing of oils and fats, which contain mainly fatty acid pollutants. The vegetable oil industry generates large amounts of waste effluents, especially in the processes of degumming, deacidification, deodorization and neutralization. Wastewaters from the vegetable oil industry are generated throughout the year in significant amounts, depending on the capacity of vegetable oil production. There are no known alternative applications of these wastewaters as raw materials for the production of marketable products. Since the literature contains no data on any potential negative impact of fatty acids on the metabolism of the bacterium Xanthomonas campestris, these wastewaters were considered potential raw materials for the biotechnological production of xanthan. In this research, vegetable oil industry wastewaters were used as the basis of the cultivation media for xanthan production with Xanthomonas campestris ATCC 13951.
The process of xanthan biosynthesis on cultivation media based on vegetable oil industry wastewaters was examined to obtain insight into the possibility of using these effluents in the aforementioned biotechnological process. Additionally, it was important to experimentally confirm the absence of substances that have an inhibitory effect on the metabolism of the production microorganism. Xanthan content, rheological parameters of the cultivation media, carbon conversion into xanthan and conversions of the most significant nutrients for biosynthesis (carbon, nitrogen and phosphorus sources) were determined as indicators of the success of biosynthesis. The obtained results show that biotechnological production of the biopolymer xanthan by the bacterium Xanthomonas campestris on cultivation media based on vegetable oil industry wastewaters simultaneously preserves the environment and provides economic benefits, which is a sustainable solution to the problem of wastewater treatment.
Keywords: biotechnology, sustainable bioprocess, vegetable oil industry wastewaters, Xanthomonas campestris
Procedia PDF Downloads 150
352 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years
Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: Phonological processing is critical in learning to read alphabetical and non-alphabetical languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada language. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada on word decoding skills during the early years of schooling. Aim and Objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 to 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial and final positions. Phonological memory was assessed using a pseudoword repetition task and phonological naming was assessed using rapid automatized naming of objects. Both phonological awareness and phonological memory measures were scored for the accuracy of the response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed.
Results: The mean scores comparison using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks except (p ≥ 0.05) for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III. The Pearson correlations revealed a highly significant positive correlation (p < 0.001) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficient was higher for phonological awareness measures compared to the others. Hence, phonological awareness was chosen as the first independent variable to enter the hierarchical regression equation, followed by rapid automatized naming and finally pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Amongst phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding
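The hierarchical regression step described in this abstract — entering syllable awareness first, then rapid automatized naming, then pseudoword repetition, and checking the change in R² at each step — can be sketched as follows. This is an illustrative reconstruction with simulated standardized scores; all values are invented, not the study's data:

```python
import numpy as np

def r_squared(X, y):
    """OLS R-squared, with an intercept column added to X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 60  # 20 children per grade, as in the study design
# hypothetical scores: decoding driven mainly by syllable awareness
awareness = rng.normal(size=n)
ran_speed = rng.normal(size=n)
repetition = rng.normal(size=n)
decoding = 0.9 * awareness + 0.4 * rng.normal(size=n)

# enter predictors hierarchically and track the change in R-squared
r2_1 = r_squared(awareness[:, None], decoding)
r2_2 = r_squared(np.column_stack([awareness, ran_speed]), decoding)
r2_3 = r_squared(np.column_stack([awareness, ran_speed, repetition]), decoding)
print(r2_1, r2_2 - r2_1, r2_3 - r2_2)  # later steps add little unique variance
```

A negligible increase in R² at steps two and three, as in the study, indicates that the later predictors explain no additional unique variance.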
Procedia PDF Downloads 175
351 Teaching Accounting through Critical Accounting Research: The Origin and Its Relevance to the South African Curriculum
Authors: Rosy Makeresemese Qhosola
Abstract:
South Africa has maintained the effort to uphold the guiding principles of its constitution. The constitution upholds principles such as equity, social justice, peace, freedom and hope, to mention but a few. Such principles form the basis for any legislation and policies that guide all fields/departments of government. Education is one of those departments or fields and is expected to abide by such principles as outlined in its policies. Therefore, as expected, education policies and legislation outline their intentions to ensure the development of students' clear critical thinking capacity as well as their creative capacities by creating learning contexts and opportunities that accommodate effective teaching and learning strategies that are learner-centred and compatible with the prescripts of a democratic constitution of the country. The paper aims at exploring and analyzing the progress of conventional accounting in terms of its adherence to the effective use of principles of good teaching, as per policy expectations in South Africa. The progress is traced by comparing conventional accounting to Critical Accounting Research (CAR), where the history of accounting as intended in the curriculum of South Africa and of CAR are highlighted. The Critical Accounting Research framework is used as a lens and mode of teaching in this paper, since it can create a space for optimal learning of accounting, marked by the use of more learner-centred methods of teaching. The Curriculum of South Africa also emphasises the use of more learner-centred methods of teaching that encourage an active and critical approach to learning, rather than rote and uncritical learning of given truths. The study seeks to maintain that conventional accounting is in contrast with principles of good teaching as per South African policy expectations.
The paper further maintains that the move beyond it, and adherence to the effective use of good teaching, could be achieved when CAR forms the basis of teaching. Data is generated through Participatory Action Research, in which meetings, dialogues and discussions are conducted with focus groups consisting of lecturers, students, subject heads, coordinators, NGOs and departmental officials. The results are analysed through Critical Discourse Analysis, since it allows analysis of the texts produced by participants. The study concludes that any teacher who aspires to achieve in the teaching and learning of accounting should first meet the minimum requirements as stated at NQF level 4, which form the basic principles of good teaching and are in line with Critical Accounting Research.
Keywords: critical accounting research, critical discourse analysis, participatory action research, principles of good teaching
Procedia PDF Downloads 309
350 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the application of the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used. However, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull distribution and the Exponential distribution, the latter a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto distribution, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u. References that set the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study intends to tackle the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the application of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully, and in each particular case one of the statistical models mentioned fits the data better than the others. Depending on the value of the threshold u, different results are obtained. Other aspects of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low-occurrence events.
Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
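The comparison described in this abstract — fitting the Generalised Pareto, two-parameter Weibull and Exponential distributions to exceedances over a threshold u — can be sketched with SciPy. The wave heights below are synthetic stand-ins for the buoy records, and the distribution and threshold choice are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# synthetic significant wave heights H_s in metres (stand-in for buoy data)
h_s = stats.weibull_min.rvs(1.5, scale=2.0, size=5000, random_state=rng)

u = np.quantile(h_s, 0.95)   # predefined threshold u
exc = h_s[h_s > u] - u       # exceedances over the threshold

# fit the three candidate models to the exceedances (location fixed at 0)
fits = {
    "genpareto": stats.genpareto.fit(exc, floc=0),
    "weibull": stats.weibull_min.fit(exc, floc=0),
    "exponential": stats.expon.fit(exc, floc=0),
}

# compare fits via total log-likelihood (higher is better)
loglik = {
    "genpareto": stats.genpareto.logpdf(exc, *fits["genpareto"]).sum(),
    "weibull": stats.weibull_min.logpdf(exc, *fits["weibull"]).sum(),
    "exponential": stats.expon.logpdf(exc, *fits["exponential"]).sum(),
}
```

Repeating this for a range of threshold values u shows how the best-fitting model can change with the threshold, which is the sensitivity the study examines.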
Procedia PDF Downloads 272
349 Rotational and Linear Accelerations of an Anthropometric Test Dummy Head from Taekwondo Kicks among Amateur Practitioners
Authors: Gabriel P. Fife, Saeyong Lee, David M. O'Sullivan
Abstract:
Introduction: Although investigations into injury characteristics are represented well in the literature, few have investigated the biomechanical characteristics associated with head impacts in Taekwondo. Therefore, the purpose of this study was to identify the kinematic characteristics of head impacts due to taekwondo kicks among non-elite practitioners. Participants: Male participants (n = 11, 175 ± 5.3 cm, 71 ± 8.3 kg) with 7.5 ± 3.6 years of taekwondo training volunteered for this study. Methods: Participants were asked to perform five repetitions of each technique (i.e., turning kick, spinning hook kick, spinning back kick, front axe kick, and clench axe kick) aimed at the Hybrid III head with their dominant kicking leg. All participants wore a protective foot pad (thickness = 12 mm) that is commonly used in competition and training. To simulate head impact in taekwondo, the target consisted of a Hybrid III 50th Percentile Crash Test Dummy (Hybrid III) head (mass = 5.1 kg) and neck (fitted with taekwondo headgear) secured to an aluminum support frame and positioned at each athlete's standing height. The Hybrid III head form was instrumented with a 500 g tri-axial accelerometer (PCB Piezotronics) mounted at the head center of gravity to obtain resultant linear accelerations (RLA). Rotational accelerations were collected using three angular rate sensors mounted orthogonally to each other (Diversified Technical Systems ARS-12K Angular Rate Sensor). The accelerometers were interfaced via a 3-channel, battery-powered integrated circuit piezoelectric sensor signal conditioner (PCB Piezotronics) and connected to a desktop computer for analysis. Acceleration data were captured using LABVIEW Signal Express and processed in accordance with SAE J211-1 channel frequency class 1000. Head Injury Criterion (HIC) values were calculated using the VSR software.
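The Head Injury Criterion mentioned above is defined as the maximum over impact windows [t1, t2] of (t2 − t1) times the mean acceleration (in g) over the window raised to the power 2.5. A minimal brute-force sketch of that standard definition follows; it is not the software used in the study, and the sample pulse is synthetic:

```python
import numpy as np

def hic(t, a, max_window=0.036):
    """Head Injury Criterion: search all windows up to max_window seconds.

    t: time stamps (s); a: resultant linear acceleration (g).
    """
    # cumulative trapezoidal integral of a(t), for fast window means
    cum = np.concatenate([[0.0], np.cumsum(np.diff(t) * 0.5 * (a[1:] + a[:-1]))])
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            mean_a = (cum[j] - cum[i]) / dt
            best = max(best, dt * mean_a ** 2.5)
    return best

# a constant 50 g pulse lasting 15 ms gives HIC = 0.015 * 50**2.5, about 265
t = np.linspace(0.0, 0.015, 151)
a = np.full_like(t, 50.0)
print(hic(t, a))
```

Production implementations restrict the window (e.g. HIC15 uses a 15 ms maximum) and operate on the filtered acceleration channel.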
A one-way analysis of variance was used to determine differences between kicks, while the Tukey HSD test was employed for pairwise comparisons. The level of significance was set to an effect size of 0.20. All statistical analyses were done using R 3.1.0. Results: A statistically significant difference was observed in RLA (p = 0.00075); however, these differences were not clinically meaningful (η² = 0.04, 95% CI: -0.94 to 1.03). No differences were identified in ROTA (p = 0.734, η² = 0.0004, 95% CI: -0.98 to 0.98). A statistically significant difference (p < 0.001) between kicks in HIC was observed, with a medium effect (η² = 0.08, 95% CI: -0.98 to 1.07). However, the confidence interval of this difference indicates uncertainty. The Tukey HSD test identified differences (p < 0.001) between kicking techniques in RLA and HIC. Conclusion: This study observed head impact levels that were comparable to previous studies of similar objectives and methodology. These data are important as impact measures from this study may be more representative of impact levels experienced by non-elite competitors. Although the clench axe kick elicited a lower RLA, the ROTA of this technique was higher than the levels from other techniques (although the differences were not large in terms of effect sizes). As the axe kick has been reported to cause severe head injury, future studies may consider further study of this kick important.
Keywords: Taekwondo, head injury, biomechanics, kicking
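As a sketch of the analysis above (one-way ANOVA across kick types plus an eta-squared effect size), using invented acceleration samples rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical resultant linear accelerations (g), 5 trials per kick
kicks = {
    "turning": rng.normal(25, 4, 5),
    "spinning_hook": rng.normal(30, 4, 5),
    "front_axe": rng.normal(22, 4, 5),
}

# one-way ANOVA across kick types
f_stat, p_value = stats.f_oneway(*kicks.values())

# eta squared = between-group sum of squares / total sum of squares
all_vals = np.concatenate(list(kicks.values()))
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in kicks.values())
ss_total = ((all_vals - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total
```

Reporting η² alongside the p-value, as the abstract does, separates statistical significance from practical magnitude.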
Procedia PDF Downloads 26
348 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving
Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem
Abstract:
This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between homebuilding and a sustainable environment has been taken into consideration in the design and construction of this building. The building consists of one floor with 3 m height and 2810 m2 area, while the envelope area is 1400 m2. The building construction fulfills the natural ventilation requirements. The glass curtain walls are about 50% of the building and the windows area is 300 m2. 6 mm greenish gray tinted tempered glass as the outer board lite, 6 mm safety glass as the inner board lite and 16 mm thick dehydrated air spaces are used in the building. Visible light with 50% transmission, 0.26 solar factor, 0.67 shading coefficient and 1.3 W/m2.K thermal insulation U-value are implemented to realize the performance requirements. Optimum electrical distribution for the lighting system, air conditioning and other electrical loads has been carried out. The power and quantity of each type of the lighting system lamps and the energy consumption of the lighting system are investigated. The design of the air conditioning system is based on summer and winter outdoor conditions. Ventilated, air-conditioned spaces and fresh air rates are determined. Variable Refrigerant Flow (VRF) is the air conditioning system used in this building. The VRF outdoor units are located on the roof of the building and connected to indoor units through refrigerant piping. Indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building energy consumption is evaluated monthly over one year and compared with the energy consumed under non-green conditions using the Hourly Analysis Program (HAP) model. The comparison results show that the total energy consumed per year in the green building is about 1,103,221 kWh while the non-green energy consumption is about 1,692,057 kWh.
In other words, the green building's total annual energy cost is reduced from $136,581 to $89,051. This means that the energy saving, and consequently the money saving, of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green energy certification programs (Leadership in Energy and Environmental Design, "LEED") as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy and offers optimum energy performance with minimum cost.
Keywords: energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability
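The quoted 35% figure follows directly from the reported totals; a quick arithmetic check:

```python
# reported annual figures from the comparison
green_kwh, baseline_kwh = 1_103_221, 1_692_057
green_cost, baseline_cost = 89_051, 136_581

energy_saving = 1 - green_kwh / baseline_kwh   # fraction of energy saved
cost_saving = 1 - green_cost / baseline_cost   # fraction of cost saved
print(f"{energy_saving:.1%}, {cost_saving:.1%}")  # both ~34.8%, i.e. about 35%
```

The energy and cost fractions agree because the study applies a uniform tariff to the annual consumption totals.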
Procedia PDF Downloads 300
347 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate, governing vegetation growth and production and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set.
The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
Keywords: non-stationary covariance function, gaussian process, ungulate biomass, MCMC, maasai mara ecosystem
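The Gaussian-process/Kriging baselines mentioned in this abstract can be sketched in a few lines. This is a generic simple-kriging predictor with an RBF kernel; the kernel choice, hyperparameters and toy gauge data are illustrative assumptions, not the paper's hierarchical model:

```python
import numpy as np

def gp_predict(X_train, y_train, X_new, length=1.0, noise=1e-6):
    """Gaussian-process (simple kriging) mean and std at new locations."""
    def rbf(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / length**2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_new, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # predictive variance: prior variance minus explained variance
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

# toy 1-D example: a rainfall-like signal observed at four gauge locations
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X[:, 0])
mean, sd = gp_predict(X, y, X)  # predicting back at the gauges
```

A hierarchical Bayesian model of the kind described additionally places priors on the kernel and regression parameters and propagates their uncertainty into the predictive standard errors.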
Procedia PDF Downloads 294
346 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops
Authors: Eric Bates
Abstract:
Higher education is recognised as being of critical importance in Ireland and has been linked as a vital factor to national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is a Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first-year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, some individuals can feel isolated as they take their first steps into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first-year undergraduates on a programme which has a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly evolve a sense of group coherence. Two separate modules – a Research module and a Communications module – delivered by the researcher were linked across two semesters. Students were allocated into random groups and each group was given a topic to be researched. There were six topics – essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab and students also used the library. The output from this was a document that formed part of the submission for the Research module.
In the second semester the groups then had to make a presentation of their findings, where each student spoke for a minimum amount of time. Presentation workshops formed part of that module and students were given the opportunity to practice their presentation skills. These presentations were video recorded to enable feedback to be given. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.
Keywords: first year experience, student retention, group work, presentation workshops
Procedia PDF Downloads 228
345 Synthesis, Physicochemical Characterization and Study of the Antimicrobial Activity of Chlorobutanol
Authors: N. Hadhoum, B. Guerfi, T. M. Sider, Z. Yassa, T. Djerboua, M. Boursouti, M. Mamou, F. Z. Hadjadj Aoul, L. R. Mekacher
Abstract:
Introduction and objectives: Chlorobutanol is a raw material mainly used as an antiseptic and antimicrobial preservative in injectable and ophthalmic preparations. The main objective of our study was the synthesis and evaluation of the antimicrobial activity of chlorobutanol hemihydrate. Material and methods: Chlorobutanol was synthesized according to the nucleophilic addition reaction of chloroform to acetone, and identified by infrared absorption using a Spectrum One FTIR spectrometer, melting point, scanning electron microscopy and colorimetric reactions. The assay of the chlorobutanol active substance was carried out by assaying the degradation products of chlorobutanol in a basic solution. The chlorobutanol obtained was subjected to bacteriological tests in order to study its antimicrobial activity. The antibacterial activity was evaluated against strains such as Escherichia coli (ATCC 25 922), Staphylococcus aureus (ATCC 25 923) and Pseudomonas aeruginosa (ATCC = American Type Culture Collection). The antifungal activity was evaluated against human pathogenic fungal strains, such as Candida albicans and Aspergillus niger, provided by the parasitology laboratory of the Hospital of Tizi-Ouzou, Algeria. Results and discussion: Chlorobutanol was obtained in an acceptable yield. The characterization tests of the product obtained showed a white and crystalline appearance (confirmed by scanning electron microscopy), solubilities (in water, ethanol and glycerol), and a melting temperature in accordance with the requirements of the European Pharmacopoeia. The colorimetric reactions indicated the presence of a trihalogenated carbon and an alcohol function. The spectral identification (IR) showed the presence of characteristic chlorobutanol peaks and confirmed the structure of the latter. The microbiological study revealed an antimicrobial effect on all strains tested (Staphylococcus aureus (MIC = 1250 µg/ml), E.
coli (MIC = 1250 µg/ml), Pseudomonas aeruginosa (MIC = 1250 µg/ml), Candida albicans (MIC = 2500 µg/ml), Aspergillus niger (MIC = 2500 µg/ml)), with MIC values close to literature data. Conclusion: Overall, the synthesized chlorobutanol satisfied the requirements of the European Pharmacopoeia and possesses antibacterial and antifungal activity; nevertheless, it is necessary to insist on the purification step of the product in order to eliminate as many impurities as possible.
Keywords: antimicrobial agent, bacterial and fungal strains, chlorobutanol, MIC, minimum inhibitory concentration
Procedia PDF Downloads 168
344 The Application of Animal Welfare Certification System for Farm Animal in South Korea
Authors: Ahlyum Mun, Ji-Young Moon, Moon-Seok Yoon, Dong-Jin Baek, Doo-Seok Seo, Oun-Kyong Moon
Abstract:
There is growing public concern over the standards of farm animal welfare, along with higher standards of food safety. In addition, the recent low incidence of Avian Influenza in laying hens among certified farms is receiving attention. In this study, we introduce the animal welfare systems covering the rearing, transport and slaughter of farm animals in South Korea. The concept of animal welfare farm certification is based on ensuring the five freedoms of animals. Animal welfare is also achieved by monitoring environmental conditions, including shelter and resting areas, feed and water, and the care of animal health. The certification of farm animal welfare is handled by the Animal Protection & Welfare Division of the Animal and Plant Quarantine Agency (APQA). Following the full amendment of the Animal Protection Law in 2011, the animal welfare farm certification program has been implemented since 2012. The certification system has expanded to cover laying hen, swine, broiler, beef cattle, dairy cow, goat and duck farms. Livestock farmers who want to be certified must apply for certification at the APQA. Upon receipt of the application, the APQA notifies the applicant of the detailed schedule of the on-site examination after reviewing the documents and conducts the on-site inspection according to the evaluation criteria of the welfare standard. If the on-site audit results meet the certification criteria, the APQA issues a certificate. The production process of certified farms is inspected at least once a year for follow-up management. As of 2017, a total of 145 farms have been certified (95 laying hen farms, 12 swine farms, 30 broiler farms and 8 dairy cow farms). In addition, animal welfare transportation vehicles and slaughterhouses have been designated since 2013, and currently 6 slaughterhouses have been certified.
The Animal Protection Law has been amended so that animal welfare certification marks can be affixed only to livestock products produced by animal welfare farms, transported in animal welfare vehicles and slaughtered at animal welfare slaughterhouses. The whole process, including rearing, transportation and slaughtering, completes the farm animal welfare system. The APQA established its second five-year animal welfare plan (2014-2019), which includes setting a minimum standard of animal welfare applicable to all livestock farms, transportation vehicles and slaughterhouses. In accordance with this plan, we will promote the farm animal welfare policy in order to truly advance the Korean livestock industry.
Keywords: animal welfare, farm animal, certification system, South Korea
Procedia PDF Downloads 399
343 Ensuring Continuity in Subcutaneous Depot Medroxy Progesterone Acetate (DMPA-SC) Contraception Service Provision Using Effective Commodity Management Practices
Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu
Abstract:
Background: The Delivering Innovations in Selfcare (DISC) project aims to increase access to self-care options for women of reproductive age, starting with self-injected subcutaneous depot medroxyprogesterone acetate (DMPA-SC) contraception services. However, the project has faced challenges in ensuring the continuous availability of the commodity in health facilities. Although most states in the country rely on the Federal Ministry of Health for supplies, some are gradually funding the procurement of Family Planning (FP) commodities. This attempt is, however, often accompanied by procurement delays and purchases inadequate to meet demand. This dilemma was further exacerbated by the commencement of demand generation activities by the project in supported states, which sharply increased commodity utilization rates and resulted in receding stock and occasional service disruptions. Strategies: The project deployed various strategies to ensure the continuous availability of commodities. These include facilitating inter-facility transfers, tracking commodity utilization monthly, alerting relevant authorities when stock levels reach a minimum, and supporting state-level procurement of DMPA-SC commodities through catalytic interventions. Results: Effective monitoring of commodity inventory at the facility level and strategic engagement with federal and state-level logistics units have proven successful in mitigating stock-outs of commodities. This has helped secure up to 13,000 units of DMPA-SC commodities from federal logistics units and enabled state units to prioritize supported sites. It has ensured the continuity of DMPA-SC services and an increasing trend in the practice of self-injection. Conclusion: A functional supply chain is crucial to achieving commodity security, and without it, health programs cannot succeed.
Stakeholder engagement, stock management and catalytic interventions have provided both short- and long-term measures to mitigate stock-outs and ensure a consistent supply of commodities to clients.
Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, commodities, stock-out
Procedia PDF Downloads 89