Search results for: drawdown time

1295 Gender Specific Differences in Clinical Outcomes of Knee Osteoarthritis Treated with Micro-Fragmented Adipose Tissue

Authors: Tiffanie-Marie Borg, Yasmin Zeinolabediny, Nima Heidari, Ali Noorani, Mark Slevin, Angel Cullen, Stefano Olgiati, Alberto Zerbi, Alessandro Danovi, Adrian Wilson

Abstract:

Knee osteoarthritis (OA) is a major cause of disability globally. In recent years, there has been growing interest in minimally invasive treatments, such as intra-articular injection of micro-fragmented adipose tissue (MFAT), which has shown great potential in treating OA. Mesenchymal stem cells (MSCs), originating from pericytes of micro-vessels in MFAT, can differentiate into mesenchymal lineage cells such as chondrocytes, osteocytes, adipocytes, and osteoblasts. Growth factors and cytokines secreted by MSCs can inhibit T-cell growth, reduce pain and inflammation, and create a micro-environment that, through paracrine signaling, promotes joint repair and cartilage regeneration. Here we present, for the first time, data supporting the hypothesis that women respond better than men to MFAT injection in terms of improvements in pain and function. Historically, women have been underrepresented in studies, and studies including both sexes regularly fail to analyse the results by sex. To quantify and mitigate this bias, we describe a reproducible statistical analysis, implemented in the open-access statistical software R, that calculates the magnitude of this difference. Genetic, hormonal, environmental, and age factors may all contribute to the observed difference between the sexes. This observational, intention-to-treat study included the complete sample of 456 patients who agreed to be scored for pain (visual analogue scale, VAS) and function (Oxford knee score, OKS) at baseline, regardless of subsequent changes to adherence or status during follow-up. We report that a significantly larger proportion of women than men responded to treatment: 90% vs. 60% improvement in VAS scores and 87% vs. 65% improvement in OKS scores, respectively. Women overall had a stronger positive response to treatment, with reduced pain and improved mobility and function. Pre-injection, the women in our cohort were in more pain and had worse joint function, a pattern commonly seen in orthopaedics. During the 2-year follow-up, however, they consistently maintained a lower incidence of discomfort and superior joint function. These data identify a clear need for further studies into the cellular, molecular, and other bases of these differences, so that this information can be used for stratification to improve outcomes for both women and men.
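
As a rough illustration of the kind of reproducible comparison the abstract describes, the sketch below runs a two-proportion z-test in Python (the study itself used R, and the group sizes here are hypothetical, since the abstract reports only percentages):

```python
# Minimal sketch of a two-proportion comparison; responder counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical split of the 456 patients (actual group sizes are not given)
n_women, n_men = 256, 200
responders_women = round(0.90 * n_women)  # 90% VAS response reported for women
responders_men = round(0.60 * n_men)      # 60% VAS response reported for men

stat, p_value = proportions_ztest(count=[responders_women, responders_men],
                                  nobs=[n_women, n_men])
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```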

Keywords: gender differences, micro-fragmented adipose tissue, knee osteoarthritis, stem cells

Procedia PDF Downloads 180
1294 Survival Analysis after a First Ischaemic Stroke Event: A Case-Control Study in the Adult Population of England

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Stroke is associated with a significant risk of morbidity and mortality. There is a scarcity of research on long-term survival after first-ever ischaemic stroke (IS) events in England with regard to the effects of different medical therapies and comorbidities. The objective of this study was to model all-cause mortality after an IS diagnosis in the adult population of England. Using a retrospective case-control design, we extracted the electronic medical records of patients born in or before 1960 in England with a first-ever ischaemic stroke diagnosis from January 1986 to January 2017 from The Health Improvement Network (THIN) database. Participants with a history of ischaemic stroke were matched to 3 controls by sex, age at diagnosis, and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a Weibull-Cox survival model that included both scale and shape effects and a shared random effect of general practice. The model included sex, birth cohort, socio-economic status, comorbidities, and medical therapies. 20,250 patients with a history of IS (cases) and 55,519 controls were followed up to 30 years. From 2008 to 2015, the one-year all-cause mortality for IS patients declined, with an absolute change of -0.5%. Prescription of preventive treatments to cases, including statins and antihypertensives, increased considerably over time. However, prescriptions for antiplatelet drugs in routine general practice decreased after 2010. The survival model revealed a survival benefit of antiplatelet treatment for stroke survivors, with a hazard ratio (HR) of 0.92 (0.90-0.94). IS diagnosis had significant interactions with sex, age at entry, and hypertension diagnosis. IS diagnosis was associated with a high risk of all-cause mortality, with HR = 3.39 (3.05-3.72) for cases compared to controls. Hypertension was associated with poor survival, with HR = 4.79 (4.49-5.09) for hypertensive cases relative to non-hypertensive controls, though the detrimental effect of hypertension did not reach significance for hypertensive controls, HR = 1.19 (0.82-1.56). This study of English primary care data showed that between 2008 and 2015, the rates of prescription of stroke preventive treatments increased and short-term all-cause mortality after IS declined. However, stroke resulted in poor long-term survival. Hypertension, a modifiable risk factor, was found to be associated with poor survival outcomes in IS patients. Antiplatelet drugs were found to be protective for survival. Better efforts are required to reduce the burden of stroke through health service development and primary prevention.
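
For readers unfamiliar with this model family, the sketch below fits a Weibull survival model in which covariates act on both the scale and the shape parameter, in the spirit of the Weibull-Cox model described (using the lifelines library in Python on simulated data; the shared random effect of general practice is omitted for simplicity, and all covariate effects are hypothetical):

```python
# Minimal sketch of a Weibull survival model with covariate effects on both
# scale and shape. The data are simulated; the study's frailty term (shared
# random effect of general practice) is not modeled here.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "is_case": rng.integers(0, 2, n),        # ischaemic stroke vs. control
    "hypertension": rng.integers(0, 2, n),
    "antiplatelet": rng.integers(0, 2, n),
})
# Hypothetical effects: cases and hypertensives die sooner, antiplatelet protects
scale = np.exp(3.0 - 0.8 * df.is_case - 0.5 * df.hypertension + 0.1 * df.antiplatelet)
t = scale * rng.weibull(1.2, n)              # latent survival times
censor = rng.uniform(0, 30, n)               # up to 30 years of follow-up
df["years_followed"] = np.minimum(t, censor)
df["died"] = (t <= censor).astype(int)

aft = WeibullAFTFitter()
# ancillary=True lets covariates act on the Weibull shape as well as the scale
aft.fit(df, duration_col="years_followed", event_col="died", ancillary=True)
aft.print_summary()
```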

Keywords: general practice, hazard ratio, health improvement network (THIN), ischaemic stroke, multiple imputation, Weibull-Cox model.

Procedia PDF Downloads 184
1293 Reflective Portfolio to Bridge the Gap in Clinical Training

Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab

Abstract:

Background: Due to the busy schedules of practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since they are prospective clinical doctors under training, students need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a continuous monitoring and learning gap analysis tool for their clinical skills. Methodology: A hardcopy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty, and students. The rationale of reflection was emphasized, and students were given samples of reflective writing. The portfolio was then implemented amongst undergraduate medical students of years four, five, and six during clinical clerkship. After 16 weeks of implementation, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Out of 52 respondents, 57.7% were in the internal medicine clinical placement rotation, and 42.3% were in the otorhinolaryngology clinical placement rotation. The respondents believe that the implementation of a reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge is good, increased the learning value when used as a formative assessment, helped them relate to different courses, and improved their professional skills. However, respondents did not agree that the portfolio improved their self-esteem or helped develop their critical thinking. The portfolio takes time to complete, supervisors were not always helpful, and students had to chase supervisors for feedback. 53.8% of the respondents followed the Gibbs reflective model to write the reflection, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful, 17.3% preferred written feedback, whilst 11.5% preferred oral feedback. Most of them suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, and 28.8% thought it was too bulky. 27.5% suggested a mobile application instead. Conclusion: The reflective portfolio, through reflection on their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.

Keywords: portfolio, reflection, feedback, clinical placement, undergraduate medical education

Procedia PDF Downloads 84
1292 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway networks play a vital role in post-disaster recovery of disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. Failure to take downtime effects into account can therefore lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and to generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network's functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curve. Second, this study can improve highway network resilience along the organizational dimension by providing bridge managers with optimal LBR strategies.
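
To make the downtime effect concrete, the sketch below computes a common resilience metric, the normalized area under the network functionality curve, for a naive recovery curve and for one in which functionality temporarily drops while each bridge is under repair (all numbers are hypothetical, and the paper's own recovery-curve formulation may differ):

```python
# Minimal sketch of the downtime effect on a resilience metric, assuming the
# common definition of resilience as the normalized area under the network
# functionality curve Q(t) over the restoration horizon.
import numpy as np

T = 100                      # restoration horizon (days), hypothetical
t = np.arange(T)
q = np.full(T, 0.6)          # post-earthquake functionality at 60%

# Naive recovery curve: each repair instantly restores 10% at its finish day
for finish_day in (20, 50, 80):
    q[finish_day:] += 0.10

# Downtime-aware curve: the highway is blocked while a bridge is being repaired,
# temporarily dropping functionality below the naive curve
q_downtime = q.copy()
for start, finish in ((10, 20), (40, 50), (70, 80)):
    q_downtime[start:finish] -= 0.15

resilience_naive = np.trapz(q, t) / T
resilience_real = np.trapz(q_downtime, t) / T
print(f"naive {resilience_naive:.3f} vs downtime-aware {resilience_real:.3f}")
print(f"overestimation: {100 * (resilience_naive / resilience_real - 1):.1f}%")
```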

Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime

Procedia PDF Downloads 150
1291 Examining the Critical Factors for Success and Failure of Common Ticketing Systems

Authors: Tam Viet Hoang

Abstract:

With a plethora of new mobility services and payment systems found in our cities and across modern public transportation systems, several cities globally have turned to common ticketing systems to help navigate this complexity. By enabling time- and space-differentiated fare structures and tariff schemes, common ticketing systems can optimize transport utilization rates, achieve cost efficiencies, and provide key incentives to specific target groups. However, not all cities and transportation systems have enjoyed a smooth journey towards the adoption, roll-out, and servicing of common ticketing systems, with experiences of both success and failure being attributed to a wide variety of critical factors. Using case study research as a methodology and cities as the main unit of analysis, this research seeks to address the fundamental question, 'What are the critical factors for the success and failure of common ticketing systems?' Using rail/train systems as its entry point, the study starts by providing a background to the evolution of transport ticketing and justifying the improvements in operational efficiency that can be achieved through common ticketing systems. Examining the socio-economic benefits of common ticketing, the research also helps articulate the value derived for the different key stakeholder groups identified. By reviewing case studies of the implementation of common ticketing systems in different cities, the research explores lessons learned, with the aim of eliciting the factors that ensure seamlessly connected, integrated e-ticketing platforms. In an increasingly digital age, where cities are now coming online, this paper seeks to unpack these critical factors, undertaking case study research drawing from the literature and lived experiences. To better understand the enabling environment and the ideal mixture of ingredients that facilitate the successful roll-out of a common ticketing system, interviews will be conducted with transport operators from several selected cities to better appreciate the challenges encountered and the strategies employed to overcome them. Meanwhile, as we begin to see the introduction of new mobile applications and user interfaces to facilitate ticketing and payment as part of the transport journey, we take stock of the numerous policy challenges ahead and their implications for city-wide and system-wide urban planning. It is hoped that this study will help to identify the critical factors for the success and failure of common ticketing systems for cities set to embark on their implementation, while serving to fine-tune processes in those cities where common ticketing systems are already in place. Outcomes from the study will facilitate an improved understanding of common pitfalls and essential milestones towards the roll-out of a common ticketing system for railway systems, especially in emerging countries where mass rapid transit systems are being considered or are under construction.

Keywords: common ticketing, public transport, urban strategies, Bangkok, Fukuoka, Sydney

Procedia PDF Downloads 87
1290 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high-pollution events leading to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish relationships between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night, and early morning) to detect possible variation of the previous trends, and on a per-year basis to verify that the identified trends persisted throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the greatest influence on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. Direct proportional relationships were found between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. In rainy periods (March-June and September-December), some trends involving precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal, and Puente Aranda stations, and also between Parque Simón Bolívar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows these clusters to be used as input to an Artificial Neural Network prediction model.
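
A minimal sketch of this PCA-then-K-means workflow in Python (scikit-learn), using randomly generated stand-in data, since the monitoring network's dataset is not reproduced here:

```python
# Minimal sketch of the PCA + K-means workflow described; the columns are
# stand-ins for the network's variables and the values are random placeholders.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
cols = ["PM10", "PM2.5", "NO2", "O3", "wind_speed", "temperature", "humidity"]
data = pd.DataFrame(rng.normal(size=(1000, len(cols))), columns=cols)

X = StandardScaler().fit_transform(data)          # put variables on one scale

pca = PCA(n_components=3).fit(X)                  # primary relations
print(pd.DataFrame(pca.components_, columns=cols).round(2))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(pd.Series(labels).value_counts())           # cluster (pattern) sizes
```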

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 258
1289 Understanding Face-to-Face Household Gardens’ Profitability and Local Economic Opportunity Pathways

Authors: Annika Freudenberger, Sin Sokhong

Abstract:

In just a few years, the Face-to-Face Victory Gardens Project (F2F) in Cambodia has developed into a high-impact project that has provided immediate and tangible benefits to local families. This has been accomplished with a relatively hands-off approach that relies on households' own motivation and personal investments of time and resources, which is both unique and impressive in the landscape of NGO and government initiatives in the area. Households have been growing food both for their own consumption and to sell or exchange. Not all targeted beneficiaries are equally motivated or maximize their involvement, but there is a clear subset of households, particularly those who serve as facilitators, whose circumstances have been transformed as a result of F2F. A number of household factors and contextual economic factors affect families' income generation opportunities. All the households we spoke with became involved with F2F with the goal of selling some proportion of their produce (i.e., not growing exclusively for their own consumption). For some, this income is marginal and supplemental to their core household income; for others, it is substantial and transformative. Some engage directly with customers and buyers in their immediate community, while others sell in larger nearby markets, and others link up with intermediary vendors. All struggle, to a certain extent, to compete in a local economy flooded with cheap produce imported from large-scale growers in neighboring provinces, Thailand, and Vietnam, although households who grow and sell herbs and greens popular in Khmer cuisine have found a stronger local market. Some are content with the scale of their garden, the income they make, and the current level of effort required to maintain it; others would like to expand but face land constraints and water management challenges. Households making a substantial income from selling their products have achieved success in different ways, making it difficult to pinpoint a clear 'model' for replication. Within our small sample of interviewees, it seems that the families with a clear passion for their gardens and high motivation to work hard to bring their products to market have succeeded in doing so. Khmer greens and herbs have been the most successful; they are not high-value crops, but they are fairly easy to grow, and there is constant demand. These crops are also not imported as much, so prices are more stable than those of crops such as long beans. Although we talked to a limited number of individuals, it also appears that successful families either restricted their crops to those that grow well in drought or flood conditions (depending on which affects them most), or already benefit from water management infrastructure, such as water tanks, which helps them diversify their crops and build their resilience.

Keywords: food security, Victory Gardens, nutrition, Cambodia

Procedia PDF Downloads 55
1288 Relaxor Ferroelectric Lead-Free Na₀.₅₂K₀.₄₄Li₀.₀₄Nb₀.₈₄Ta₀.₁₀Sb₀.₀₆O₃ Ceramic: Giant Electromechanical Response with Intrinsic Polarization and Resistive Leakage Analyses

Authors: Abid Hussain, Binay Kumar

Abstract:

Environment-friendly lead-free Na₀.₅₂K₀.₄₄Li₀.₀₄Nb₀.₈₄Ta₀.₁₀Sb₀.₀₆O₃ (NKLNTS) ceramic was synthesized by the solid-state reaction method in search of a potential candidate to replace lead-based ceramics such as PbZrO₃-PbTiO₃ (PZT) and Pb(Mg₁/₃Nb₂/₃)O₃-PbTiO₃ (PMN-PT) in various applications. The ceramic was calcined at 850 °C and sintered at 1090 °C. The powder X-ray diffraction (XRD) pattern revealed the formation of a pure perovskite phase with tetragonal symmetry (space group P4mm). The surface morphology of the ceramic was studied using the Field Emission Scanning Electron Microscopy (FESEM) technique. Well-defined grains with a homogeneous microstructure were observed, with an average grain size of ~0.6 µm. A very large piezoelectric charge coefficient (d₃₃ ~ 754 pm/V) was obtained, indicating the ceramic's potential for use in transducers and actuators. In dielectric measurements, a high ferroelectric-to-paraelectric phase transition temperature (Tm ~ 305 °C), a high maximum dielectric permittivity of ~2110 (at 1 kHz), and a low dielectric loss (< 0.6) were obtained, suggesting the utility of NKLNTS ceramic in high-temperature ferroelectric devices. The degree of diffuseness (γ) was found to be 1.61, confirming relaxor ferroelectric behavior in the NKLNTS ceramic. The P-E hysteresis loop was traced, and the spontaneous polarization was found to be ~11 µC/cm² at room temperature. The pyroelectric coefficient was very high (p ~ 1870 µC m⁻² °C⁻¹), indicating applicability in pyroelectric detectors, including fire and burglar alarms and infrared imaging. The NKLNTS ceramic showed fatigue-free behavior over 10⁷ switching cycles. A remanent hysteresis task was performed to determine the true-remanent (or intrinsic) polarization of the ceramic by eliminating non-switchable components; it showed that a major portion (83.10%) of the remanent polarization (Pr) is switchable, which makes NKLNTS ceramic a suitable material for memory switching device applications. A time-dependent compensated (TDC) hysteresis task revealed the resistive-leakage-free nature of the ceramic. The performance of NKLNTS ceramic was found to be superior to many lead-based piezoceramics, which it can therefore effectively replace in piezoelectric, pyroelectric, and long-duration ferroelectric applications.
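
For context, the degree of diffuseness γ reported above is conventionally extracted from the modified Curie-Weiss law; the abstract does not state the fitting procedure, so the relation below is the standard one rather than the authors' own:

$$\frac{1}{\varepsilon} - \frac{1}{\varepsilon_m} = \frac{(T - T_m)^{\gamma}}{C}, \qquad 1 \le \gamma \le 2$$

Here ε_m is the maximum permittivity at temperature T_m and C is a Curie-like constant; γ = 1 corresponds to a normal ferroelectric and γ = 2 to an ideal relaxor, so the reported γ = 1.61 places NKLNTS between the two, consistent with the relaxor behavior claimed.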

Keywords: dielectric properties, ferroelectric properties, lead-free ceramic, piezoelectric property, solid-state reaction, true-remanent polarization

Procedia PDF Downloads 134
1287 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images, with the potential to assist medical professionals in accurately diagnosing diseases and thereby improve patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in their diagnosis. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. The study utilized the Mask R-CNN algorithm, a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. Three different classifiers, namely Random Forest, K-Nearest Neighbor, and Support Vector Machine, were used to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases from symptoms, with ensemble learning techniques significantly improving the accuracy of prediction. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
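
The symptom-based part of the pipeline can be sketched as follows, combining the three named classifiers in a soft-voting ensemble (the voting scheme and the binary symptom matrix are assumptions for illustration; the study's exact ensemble technique and dataset are not described in the abstract):

```python
# Minimal sketch of the symptom classifiers described (Random Forest, k-NN, SVM)
# combined in a simple voting ensemble; all data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(600, 30))        # 30 binary symptom indicators
y = rng.integers(0, 4, size=600)              # 4 hypothetical disease labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(probability=True, random_state=0)),
], voting="soft")

ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```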

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 151
1286 Evaluation of Monoterpenes Induction in Ugni molinae Ecotypes Subjected to a Red Grape Caterpillar (Lepidoptera: Arctiidae) Herbivory

Authors: Manuel Chacon-Fuentes, Leonardo Bardehle, Marcelo Lizama, Claudio Reyes, Andres Quiroz

Abstract:

The insect-plant interaction is a complex process in which the plant releases chemical signals that modify the behavior of insects. Insect herbivory can trigger mechanisms that increase the production of secondary metabolites, allowing the plant to cope with herbivores. Monoterpenes are a class of secondary metabolites involved in direct defense, acting as repellents of herbivores, and in indirect defense, acting as attractants for insect predators. An increase in monoterpene concentration is an effect commonly associated with herbivory: plants subjected to herbivore damage increase monoterpene production in comparison to undamaged plants. In this framework, co-evolutionary aspects play a fundamental role in the adaptation of herbivores to their host and in the counter-adaptive strategies of the plants to avoid herbivores. In this context, Ugni molinae 'murtilla' is a native shrub of Chile characterized by its antioxidant activity, mainly related to the phenolic compounds present in its fruits. The larval stage of the red grape caterpillar Chilesia rudis Butler (Lepidoptera: Arctiidae) has been reported as an important defoliator of U. molinae. This insect is native to Chile and has probably been involved in a co-evolutionary process with murtilla. Therefore, we hypothesized that herbivory by the red grape caterpillar increases the emission of monoterpenes in Ugni molinae. Ecotypes 19-1 and 22-1 of murtilla were established and maintained at 25 °C in the Laboratorio de Química Ecológica at Universidad de La Frontera. Red grape caterpillars of ~40 mm were collected from grasses near Temuco (Chile) and deprived of food for 24 h before the assays. Ten caterpillars were placed on the foliage of ecotypes 19-1 and 22-1 and allowed to feed for 48 h. After this time, the caterpillars were removed and the monoterpenes were collected. A glass chamber was used to enclose the ecotypes, and a Porapak-Q column was used to trap the monoterpenes. After 24 h of collection, the columns were desorbed with hexane. The samples were then injected into a gas chromatograph coupled to a mass spectrometer, and monoterpenes were identified according to the NIST library. All experiments were performed in triplicate. Results showed that α-pinene, β-phellandrene, limonene, and 1,8-cineole were the main monoterpenes released by the murtilla ecotypes. For ecotype 19-1, the abundance of α-pinene was significantly higher in plants subjected to herbivory (100%) than in control plants (54.58%). Moreover, β-phellandrene and 1,8-cineole were observed only in control plants. For ecotype 22-1, there was no significant difference in monoterpene abundance. In conclusion, the results suggest a trade-off involving β-phellandrene and 1,8-cineole in response to herbivory damage by the red grape caterpillar, generating an increase in α-pinene abundance.

Keywords: Chilesia rudis, gas chromatography, monoterpenes, Ugni molinae

Procedia PDF Downloads 149
1285 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing

Authors: Jianan Sun, Ziwen Ye

Abstract:

Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items can be improved, which in turn allows the abilities of examinees taking the MCAT to be estimated more accurately. This research seeks to solve the item-trait pattern recognition problem for replenished items in an MCAT item pool from the perspective of statistical variable selection. A popular multidimensional item response theory model, the multidimensional two-parameter logistic (M2PL) model, is assumed to fit the MCAT response data. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect item-trait patterns of replenished items based on the item responses and the examinees' ability estimates collected from a designed MCAT procedure. The proposed method has several advantages. First, it does not strictly depend on the relative order between the replenished items and the selected operational items, so the replenished items can be mixed into the operational items in any reasonable order, for example to satisfy content constraints or other test requirements. Second, the LASSO improves the interpretability of the multidimensional replenished items in MCAT. Third, the method exploits the strengths of shrinkage-based variable selection, helping to check item quality and the key dimensional features of replenished items while requiring less time and labor for response data collection than the traditional factor analysis approach. Moreover, the proposed method ensures that the dimensions of the replenished items are recognized consistently with the dimensions of the operational items in the MCAT item pool. Simulation studies were conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlations, item discriminations, test lengths, and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items from a pool of highly discriminating items by Bayesian A-optimality in MCAT improves the recognition accuracy of the item-trait patterns of replenished items. The pattern recognition accuracy under conditions with correlated traits is better than under those with independent traits, especially for item pools consisting of comparatively low-discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
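
The core idea can be sketched as follows: treating examinees' ability estimates as covariates, an L1-penalized logistic regression on a replenished item's responses recovers which traits the item loads on, i.e., its item-trait pattern (data are simulated under an M2PL-like model; the paper's exact estimation procedure may differ):

```python
# Minimal sketch: L1-penalized logistic regression of one replenished item's
# responses on ability estimates; near-zero coefficients mark traits the item
# does not measure. Loadings and sample sizes below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_examinees, n_traits = 2000, 3
theta = rng.normal(size=(n_examinees, n_traits))     # ability estimates

a_true = np.array([1.2, 0.0, 0.9])                   # item loads on traits 1 and 3
d_true = -0.3
p = 1 / (1 + np.exp(-(theta @ a_true + d_true)))     # M2PL response probability
y = rng.binomial(1, p)                               # simulated item responses

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(theta, y)
print("estimated loadings:", lasso.coef_.round(2))   # near-zero -> trait not measured
```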

Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection

Procedia PDF Downloads 130
1284 Determinants of Maternal Near-Miss among Women in Public Hospital Maternity Wards in Northern Ethiopia: A Facility Based Case-Control Study

Authors: Dejene Ermias Mekango, Mussie Alemayehu, Gebremedhin Berhe Gebregergs, Araya Abrha Medhanye, Gelila Goba

Abstract:

Background: Maternal near miss (MNM) can be used as a proxy indicator of the maternal mortality ratio. There is a huge gap in lifetime risk between Sub-Saharan Africa and developed countries. In Ethiopia, a significant number of women die each year from complications during pregnancy, childbirth, and the post-partum period. Besides, few studies have been performed on MNM, and little is known regarding its determinant factors. This study aims to identify determinants of MNM among women in Tigray region, Northern Ethiopia. Methods: A case-control study was conducted in hospitals in Tigray region, Ethiopia, from January 30 to March 30, 2016. The sample included 103 cases and 205 controls recruited from women seeking obstetric care at six public hospitals. Clients with a life-threatening obstetric complication, including haemorrhage, hypertensive diseases of pregnancy, dystocia, infections, and anemia or clinical signs of severe anemia in women without haemorrhage, were taken as cases, and those with normal obstetric outcomes were considered controls. Cases were selected by proportional-to-size allocation, while systematic sampling was employed for controls. Data were analyzed using SPSS version 20.0. Binary and multivariable logistic regression analyses were performed, and odds ratios were calculated with 95% CIs. Results: The largest proportion of cases and controls was in the 20-29 year age group, accounting for 37.9% (39) of cases and 31.7% (65) of controls. Roughly 90% of cases and controls were married. About two-thirds of controls and 45.6% (47) of cases had a gestational age between 37 and 41 weeks. A history of chronic medical conditions was reported in 55.3% (57) of cases and 33.2% (68) of controls. Women with no formal education (AOR = 3.2; 95% CI: 1.24-8.12), women younger than 16 years at first pregnancy (AOR = 2.5; 95% CI: 1.12-5.63), induced labor (AOR = 3; 95% CI: 1.44-6.17), history of Cesarean section (C-section) (AOR = 4.6; 95% CI: 1.98-7.61) or chronic medical disorder (AOR = 3.5; 95% CI: 1.78-6.93), and travel of more than 60 minutes to the final place of care (AOR = 2.8; 95% CI: 1.19-6.35) were all associated with higher odds of experiencing MNM. Conclusions: The Government of Ethiopia should continue its efforts to address the lack of road and health facility access as well as education, which will help reduce MNM. Work should also continue to educate women and providers about common predictors of MNM, such as a history of C-section, chronic illness, and teenage pregnancy. These efforts should be carried out at the facility, community, and individual levels. Targeted follow-up of women with a history of chronic disease or C-section could also be a practical way to reduce MNM.
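
As an illustration of how adjusted odds ratios (AORs) like those above are obtained, the sketch below fits a multivariable logistic regression on simulated case-control data in Python (statsmodels); the variable names and effect sizes are hypothetical stand-ins for the study's risk factors:

```python
# Minimal sketch of a multivariable logistic regression yielding AORs with 95% CIs;
# the case-control data here are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 308                                   # 103 cases + 205 controls
df = pd.DataFrame({
    "no_formal_education": rng.integers(0, 2, n),
    "prior_c_section": rng.integers(0, 2, n),
    "chronic_disorder": rng.integers(0, 2, n),
})
# Hypothetical risk effects used to simulate the outcome
logit = 0.9 * df.no_formal_education + 1.2 * df.prior_c_section - 1.0
df["near_miss"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df.drop(columns="near_miss"))
fit = sm.Logit(df["near_miss"], X).fit(disp=0)
aor = np.exp(fit.params)                  # adjusted odds ratios
ci = np.exp(fit.conf_int())               # 95% confidence intervals
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "low", 1: "high"})], axis=1).round(2))
```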

Keywords: maternal near miss, severe obstetric hemorrhage, hypertensive disorder, c-section, Tigray, Ethiopia

Procedia PDF Downloads 221
1283 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors

Authors: Louise Ödlund, Danica Djuric Ilic

Abstract:

Sustainable development involves many challenges related to energy use, such as (1) developing flexibility on the demand side of electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two-thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized district heating (DH) technology as being of essential importance for reaching sustainability. Flexibility in the fuel mix, possibilities for industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration are only some of the benefits that characterize DH technology. The aim of this study is to provide an overview of the possible business strategies that would enable DH to play an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study takes a systems approach in which DH is seen as part of an integrated system that also includes the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs; in this way, DH can also promote the development of biofuel production technologies that are not yet mature. Converting the energy used to run industrial processes from fossil fuels and electricity to DH (above all biomass- and waste-based DH), and delivering excess heat from industrial processes to local DH systems, would make industry less dependent on fossil fuels and fossil fuel-based electricity, increase the energy efficiency of the industrial sector, and reduce production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while increasing CHP production in local DH systems would replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and reduce the capacity requirements on the national electricity grid (i.e., it would reduce the pressure on bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants in response to variations in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid.

Keywords: energy system, district heating, sustainable business strategies, sustainable development

Procedia PDF Downloads 169
1282 A Research Review on the Presence of Pesticide Residues in Apples Carried out in Poland in the Years 1980-2015

Authors: Bartosz Piechowicz, Stanislaw Sadlo, Przemyslaw Grodzicki, Magdalena Podbielska

Abstract:

Apples are popular fruits, eaten fresh and/or after processing. For instance, Golden Delicious is an apple variety commonly used in the production of foods for babies and toddlers. It is no wonder that complex analyses of pesticide residue levels in these fruits have been carried out since the eighties and have continued to the present. The results obtained were presented, usually as teamwork, at the scientific sessions organised by the Institute of Plant Protection - National Research Institute (IOR) in Poznań and published in the Scientific Works of the Institute (now Progress in Plant Protection/Postępy w Ochronie Roślin) or the Journal of Plant Protection Research, as well as in many non-periodical publications. These reports included studies carried out by the IOR laboratories in Poznań, Sośnicowice, Rzeszów, and Bialystok. The first detailed studies on the presence of pesticide residues in apple fruits by the laboratory in Rzeszów were published in 1991 in the article entitled 'The presence of pesticides in apples of late varieties from the area of south-eastern Poland in the years 1986-1989' in the Annals of the National Institute of Hygiene in Warsaw. These surveys gave the scientific basis for business contacts between the Polish company Alima and the American company Gerber. At the beginning of the 21st century, systematic and complex studies on the deposition of pesticide residues in apples were initiated in Poland. First of all, the levels of active ingredients of plant protection products applied against storage diseases 2-3 weeks before harvest were determined, since these substances usually generate the highest residue levels. The fungicide residues in apples were also assessed during storage in a controlled atmosphere and during processing. Taking into account the need to update the Maximum Residue Levels (MRLs) of pesticides in force in Poland and other European countries, and to rationalise the ways they are determined, many field tests on the behaviour of the more important fungicides on mature fruits just before harvest were carried out. The rate of their disappearance and a mathematical equation relating the residue level of each substance to the applied dose were determined. These two parameters allowed the MRLs of pesticides in force at that time to be evaluated, and a coherent model for determining them for new substances to be proposed. The results obtained were assessed in terms of the health risk for adult consumers and children, and used to determine treatment timing such that mature apples could meet the rigorous level of 0.01 mg/kg.
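
The disappearance rate mentioned above is commonly described by first-order decay; the abstract does not reproduce the authors' equation, so the sketch below uses that standard form with hypothetical numbers:

```python
# Minimal sketch of a standard first-order dissipation model for pesticide
# residues, R(t) = R0 * exp(-k * t). The rate constant and initial residue
# are hypothetical; the paper's fitted equation is not given in the abstract.
import numpy as np

R0 = 1.5          # residue just after the last treatment (mg/kg), hypothetical
k = 0.12          # first-order disappearance rate (1/day), hypothetical
half_life = np.log(2) / k
print(f"half-life: {half_life:.1f} days")

# Days needed for the residue to fall to the 0.01 mg/kg level cited above
t_limit = np.log(R0 / 0.01) / k
print(f"days to reach 0.01 mg/kg: {t_limit:.0f}")
```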

Keywords: apple, disappearance, health risk, MRL, pesticide residue, research

Procedia PDF Downloads 273
1281 Extracting an Experimental Relation between SMD, Mass Flow Rate, Velocity and Pressure in Swirl Fuel Atomizers

Authors: Mohammad Hassan Ziraksaz

Abstract:

Fuel atomizers are used in a wide range of IC engines, turbojets, and a variety of liquid-propellant rocket engines. As a fuel spray fully develops, its characteristics approach their ultimate values. Fuel spray characteristics such as the Sauter mean diameter (SMD), injection pressure, mass flow rate, droplet velocity, and spray cone angle play important roles in atomizing the liquid fuel into finely atomized droplets and finally forming a fine fuel spray. A well-performed, fully developed, fine spray without any defects suggests the idea of finding an experimental relation between the main effective spray characteristics. Extracting an experimental relation between SMD and the other physical characteristics of the spray in swirl fuel atomizers is the main scope of this experimental work. Droplet velocity, fuel mass flow rate, SMD, and spray cone angle were the parameters measured. A set of twelve reverse-engineered atomizers without any spray defects and a set of eight original atomizers, serving as references for well-performed sprays, were used in this work. More than 350 tests, mostly repeated, were performed. This work shows that although the spray cone angle plays a very effective role in spray formation, after formation it smoothly approaches an almost constant value while the other characteristics change to create fine droplets. Therefore, the work to find the relation between the characteristics focused on SMD, droplet velocity, fuel mass flow rate, and injection pressure. The process of fuel spray formation begins at an injection pressure of 5 psig, where a tiny fuel onion attaches to the injector tip, and ends at an injection pressure of 250 psig, where the fully developed fine fuel spray forms. Injection pressure was gradually increased to observe how the spray forms. At each step, all parameters were measured and recorded carefully to provide a data bank. Various diagrams were drawn to study the behavior of the parameters in more detail. The experiments and graphs show that a power-law equation best describes the changes in the parameters. The experimental relation of SMD with pressure P, fuel mass flow rate Q̇, and droplet velocity V was extracted individually, in pairs. In this way, the proportional relation of SMD with the other parameters was found. The next step was to find an experimental relation including all the parameters. Using the obtained proportional relation, replacing the parameters with experimentally measured ones, and drawing graphs of experimental SMD versus proportional SMD (SMD_P), a correction equation and consequently the final experimental equation were obtained. This experimental equation is specific to swirl fuel atomizers, and its use under different conditions shows about 3% error; lower error, and consequently higher accuracy, is expected as the number of experiments and the accuracy of data collection increase.
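
A minimal sketch of how a power-law relation of the form SMD ∝ P^a Q̇^b V^c can be extracted by linear regression in log space (the measurements below are hypothetical placeholders, not the paper's data):

```python
# Minimal sketch of a power-law fit, log(SMD) = log(k) + a*log(P) + b*log(Qdot)
# + c*log(V), via least squares. All measurement values are hypothetical.
import numpy as np

# Hypothetical measurements (pressure in psig, flow in g/s, velocity in m/s, SMD in um)
P = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
Qdot = np.array([2.1, 3.0, 3.7, 4.3, 4.8])
V = np.array([12.0, 17.5, 21.0, 24.5, 27.0])
SMD = np.array([95.0, 72.0, 61.0, 54.0, 49.0])

A = np.column_stack([np.ones_like(P), np.log(P), np.log(Qdot), np.log(V)])
coef, *_ = np.linalg.lstsq(A, np.log(SMD), rcond=None)
log_k, a, b, c = coef
print(f"SMD ~ {np.exp(log_k):.1f} * P^{a:.2f} * Qdot^{b:.2f} * V^{c:.2f}")
```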

Keywords: droplet velocity, experimental relation, mass flow rate, SMD, swirl fuel atomizer

Procedia PDF Downloads 160
1280 Cellular Targeting to Dual Gaseous Microenvironments by Polydimethylsiloxane Microchip

Authors: Samineh Barmaki, Ville Jokinen, Esko Kankuri

Abstract:

We report a microfluidic chip that can be used to modify the gaseous microenvironment of a cell culture under ambient atmospheric conditions. The aim of the study is to show the cellular response to nitric oxide (NO) under hypoxic (oxygen < 5%) conditions. Simultaneous targeting of hypoxia and nitric oxide will provide an opportunity for NO-based therapeutics. Studies on cellular responses to lowered oxygen concentration or to gaseous mediators are usually carried out in a specific macro-environment, such as a hypoxia chamber, or with specific NO donor molecules that may have additional toxic effects. In our study, the chip consists of a microfluidic layer and a cell culture well, separated by a thin gas-permeable polydimethylsiloxane (PDMS) membrane. The main design goal is to separate the oxygen scavenger and NO donor solutions, which are often toxic, from the cell media. Two different types of gas exchangers, titled 'pool' and 'meander', were tested. We find that the pool design allows us to reach a higher level of oxygen depletion than the meander (24.32 ± 19.82% vs. -3.21 ± 8.81%). Our microchip design makes cell culture simpler and makes it easy to adapt existing cell culture protocols. Our first application is using the chip to create hypoxic conditions on targeted areas of a cell culture. In this study, the oxygen scavenger sodium sulfite generates hypoxia, and its effect on human embryonic kidney cells (HEK-293) is studied. The PDMS membrane was coated with fibronectin before initiating the cell cultures, and the cells were grown for 48 h on the chips before starting the gas control experiments. The hypoxia experiments were performed by pumping O₂-depleted H₂O into the microfluidic channel at a flow rate of 0.5 ml/h. Image-iT® hypoxia reagent, which reports on oxygen levels, was added to the HEK-293 cells. A fluorescent signal appeared in the stained cells after 6 h of pumping oxygen-depleted H₂O through the microfluidic channel in the pool area. The exposure to different levels of O₂ can be controlled by varying the thickness of the PDMS membrane. Recently, we improved the design of the microfluidic chip so that it can control the microenvironment of two different gases at the same time; the new design also improved the hypoxic response. The cells were grown on the thin PDMS membrane for 30 hours, and the oxygen scavenger was pumped into the microfluidic channel at a flow rate of 0.1 ml/h. We also show that pumping sodium nitroprusside (SNP), a nitric oxide donor activated under light, can generate nitric oxide on top of the PDMS membrane. We aim to show the microenvironment response of HEK-293 cells to both nitric oxide (by pumping SNP) and hypoxia (by pumping the oxygen scavenger solution) in separate channels of one microfluidic chip.

Keywords: hypoxia, nitric oxide, microenvironment, microfluidic chip, sodium nitroprusside, SNP

Procedia PDF Downloads 132
1279 Quantum Mechanics as A Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics has only two paths, no more: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: (1) in quantum mechanics, despite its success, the problems of measurement and of wave-function interpretation remain obscure; (2) in special relativity, despite the success of the equivalence of rest mass and energy, the energy becoming infinite at the speed of light is contrary to logic, because the speed of light is not infinite, and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, 'God doesn't play dice.' From de Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. Finally, this research yielded three important results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations were reached in a completely different way from Dirac's method, and they show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: when Bohr suggested that unobserved particles are in a probabilistic state, Einstein made his famous claim ('God does not play dice'). Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the above equations. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.

Keywords: Lorentz quantum factor, Planck's energy as a limiting case of Einstein's energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 74
1278 Investigation of a Technology Enabled Model of Home Care: the eShift Model of Palliative Care

Authors: L. Donelle, S. Regan, R. Booth, M. Kerr, J. McMurray, D. Fitzsimmons

Abstract:

Palliative home health care provision within the Canadian context is challenged by: (i) a shortage of registered nurses (RNs) and RNs with palliative care expertise, (ii) an aging population, (iii) reliance on unpaid family caregivers to sustain home care services, with limited support for this 'care work', (iv) a model of healthcare that assumes client self-care, and (v) competing economic priorities. In response, an interprofessional team of service provider organizations, a software/technology provider, and health care providers developed and implemented a technology-enabled model of home care, the eShift model of palliative home care (eShift). The eShift model combines communication and documentation technology with non-traditional utilization of health human resources to meet patient needs for palliative care in the home. The purpose of this study was to investigate the structure, processes, and outcomes of the eShift model of care. Methodology: Guided by Donabedian's evaluation framework for health care, this qualitative-descriptive study investigated the structure, processes, and outcomes of care in the eShift model of palliative home care. Interviews and focus groups were conducted with health care providers (n = 45), decision-makers (n = 13), technology providers (n = 3), and family caregivers (n = 8). Interviews were recorded and transcribed, and a deductive analysis of the transcripts was conducted. Study findings: (1) Structure: The eShift model consists of a remotely situated RN who uses technology to direct care provision virtually to patients in their homes. The remote RN is connected virtually to a health technician (an unregulated care provider) in the patient's home using real-time communication. The health technician uses a smartphone modified with the eShift application and communicates with the RN, who uses a computer with the eShift application/dashboard. Documentation and communication about patient observations and care activities occur in the eShift portal. The RN is typically accountable for four to six health technicians and patients over an 8-hour shift. The technology provider was identified as an important member of the healthcare team. Other members of the team include family members, care coordinators, nurse practitioners, physicians, and allied health professionals. (2) Processes: Conventionally, patient needs are the focus of care; within eShift, however, both the patient and the family caregiver were the focus of care. Enhanced medication administration was seen as one of the most important processes, and family caregivers reported high satisfaction with the care provided. Health care providers perceived enhanced teamwork. (3) Outcomes: Patients were able to die at home. The eShift model enabled consistency and continuity of care and effective management of patient symptoms and caregiver respite. Conclusion: More than a technology solution, the eShift model of care was viewed as transforming home care practice and as an innovative way to resolve the shortage of palliative care nurses within home care.

Keywords: palliative home care, health information technology, patient-centred care, interprofessional health care team

Procedia PDF Downloads 414
1277 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), owing to the richness of the information it provides, is the gold-standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, well beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information; this manual analysis is prone to error and time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. The purpose of this work was therefore to evaluate brain volume in MRI scans of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and quantification of brain volume. The first image processing step was brain extraction by skull stripping from the original image. In the skull stripper for brain MRI, the algorithm registers a grayscale atlas image to the grayscale patient image; the associated brain mask is propagated using the registration transformation, and this mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border, with dedicated expansion, curvature, and advection terms). In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (grant number 2019/16362-5).
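
A minimal sketch of the second step (voxel counting converted to cc), assuming the segmentation mask is stored as a NIfTI file and read with nibabel; the function name and file path are illustrative, not from the paper:

```python
import numpy as np
import nibabel as nib  # common NIfTI reader, assumed here; not named in the abstract

def brain_volume_cc(mask_path: str) -> float:
    """Volume of a binary brain mask in cubic centimeters (cc)."""
    img = nib.load(mask_path)
    mask = img.get_fdata() > 0                # voxels labelled as brain
    dx, dy, dz = img.header.get_zooms()[:3]   # voxel dimensions in mm
    voxel_mm3 = dx * dy * dz                  # volume of one voxel in mm^3
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3

# e.g. print(f"{brain_volume_cc('patient01_brain_mask.nii.gz'):.1f} cc")
```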

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 145
1276 Process Safety Management Digitalization via SHEQTool Based on Occupational Safety and Health Administration and Center for Chemical Process Safety: A Case Study in Petrochemical Companies

Authors: Saeed Nazari, Masoom Nazari, Ali Hejazi, Siamak Sanoobari Ghazi Jahani, Mohammad Dehghani, Javad Vakili

Abstract:

More than ever, digitization is an imperative for businesses to keep their competitive advantage, foster innovation, and reduce paperwork. To design and successfully implement digital transformation initiatives within a process safety management system, employees need to be equipped with the right tools, frameworks, and best practices. We developed a unique, fully dynamic full-stack application called SHEQTool, based on our extensive expertise, experience, and client feedback, to support business processes and particularly operations safety management. We used our best knowledge and the scientific methodologies published in CCPS and OSHA guidelines to streamline operations and integrated them into task management within petrochemical companies. We digitalized the main elements of their process safety management system and their sub-elements, such as hazard identification and risk management, training and communication, inspection and audit, management of critical changes, contractor management, permit to work, pre-start-up safety review, incident reporting and investigation, emergency response plan, personal protective equipment, occupational health, and action management, in a fully customizable manner with no programming needed from users. We reviewed the feedback from the main actors within petrochemical plants, which highlights improvements in business performance and productivity as well as in tracking their functions’ key performance indicators (KPIs), because the tool: (1) saves the time, resources, and costs of paperwork (Digitalization); (2) reduces errors and improves performance within the management system by covering most of the organization’s daily software needs, reducing the complexity and associated costs of numerous tools and their required training (One Tool Approach); (3) focuses on management systems, integrates functions, and puts them into traceable task management (RASCI and Flowcharting); (4) helps the entire enterprise be resilient to any change of processes, technologies, or assets with minimum cost (Organizational Resilience); (5) significantly reduces incidents and errors via world-class safety management programs and elements (Simplification); (6) gives companies a systematic, traceable, risk-based, process-based, and science-based integrated management system (proper Methodologies); and (7) helps business processes comply with ISO 9001, ISO 14001, ISO 45001, ISO 31000, best practices, and legal regulations through a PDCA approach (Compliance).

Keywords: process, safety, digitalization, management, risk, incident, SHEQTool, OSHA, CCPS

Procedia PDF Downloads 63
1275 Efficiency of Different Types of Addition on the Hydration Kinetics of Portland Cement

Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies

Abstract:

Some of the problems to be solved by the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete in cold weather, and the manufacture of precast concrete without a costly heating step. The development of these applications requires accelerating the hydration kinetics in order to decrease the setting time and obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g., the creation of nucleation sites) have already been studied in the literature (e.g., using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). The goal of this study, however, was to establish a clear ranking of the efficiency of several types of addition by using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any loss of reactivity before use, the cement was stored in waterproof, sealed bags to prevent any contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes with a water-to-cement ratio of 0.45, incorporating different compounds (industrially available or laboratory-synthesized) selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g., fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g., containing calcium chloride, hydrated Portland cement, or Master X-SEED 100, etc.) were investigated. The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while conserving the same water-to-cement ratio), was also studied. The results demonstrated that X-SEED®, hydrated calcium nitrate, and calcium chloride (and, to a lesser extent, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentration (e.g., 1 wt.% of dry extract compared to cement). At higher rates of addition, fumed silica, precipitated calcium carbonate, and titanium dioxide can also accelerate the hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. In contrast, the nanoparticles of iron or iron oxides, the activated charcoal, and the dried crystallised hydrates did not show any accelerating effect. Future experiments are planned to establish the ranking of these additions, in terms of accelerating effect, using low-reactivity cements and other water-to-cement ratios.
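
A minimal sketch of the paste proportioning described above, assuming the usual bookkeeping: additions are dosed by dry extract relative to cement, and water carried by aqueous additions counts toward the mixing water so that the 0.45 water-to-cement ratio is conserved (this convention is our assumption, not spelled out in the abstract):

```python
W_C_RATIO = 0.45  # water-to-cement ratio used throughout the study

def paste_proportions(cement_g: float, dosage_pct: float, solid_fraction: float = 1.0):
    """Masses (g) for one paste.

    dosage_pct     -- dry extract of the addition, % by mass of cement
    solid_fraction -- dry-extract fraction of the addition (1.0 for dry
                      powders, <1.0 for aqueous solutions such as X-SEED)
    """
    dry_extract = cement_g * dosage_pct / 100.0
    addition = dry_extract / solid_fraction         # as-received mass
    water_in_addition = addition - dry_extract      # carried by aqueous additions
    water_to_add = cement_g * W_C_RATIO - water_in_addition
    return {"cement": cement_g, "addition": addition, "water": water_to_add}

# e.g. 1 wt.% dry extract of a 20%-solids suspension on 500 g of cement:
# paste_proportions(500.0, 1.0, solid_fraction=0.20)
```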

Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement

Procedia PDF Downloads 255
1274 Development and Total Error Concept Validation of a Common Analytical Method for the Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Headspace

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory in all release testing of active pharmaceutical ingredients (APIs). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all seven amino acids, a sensitive and simple method was developed using a gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were observed for the acetic acid peaks, due to the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS rely on a derivatisation technique to protect the acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process needed to assure the fitness of the procedure; therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-WAXetr column manufactured by Agilent (internal diameter: 530 µm; film thickness: 2.0 µm; length: 30 m). Helium was used as the carrier gas at a constant flow of 6.0 mL/min in constant make-up mode. The present method is simple, rapid, and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50-200 ppm for isopropyl alcohol, 50-3000 ppm for ethanol, 50-400 ppm for methanol, and 100-400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.
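
A simplified illustration of the total-error acceptance check described above: at each concentration level, bias and precision are combined and compared with the acceptance limit (±40% at the quantitation limit, ±30% elsewhere). A full accuracy profile uses beta-expectation tolerance intervals; the fixed coverage factor k = 2 below is our simplification, and the example values are invented:

```python
import numpy as np

def passes_total_error(measured: np.ndarray, nominal: float, at_loq: bool = False) -> bool:
    """Simplified total-error check: |bias| + k*RSD within the acceptance limit."""
    recovery = 100.0 * measured / nominal     # recoveries, %
    bias = recovery.mean() - 100.0            # relative bias, %
    rsd = recovery.std(ddof=1)                # precision estimate, %
    limit = 40.0 if at_loq else 30.0          # per the validation design above
    return abs(bias) + 2.0 * rsd <= limit     # k = 2 approximates 95% coverage

# e.g. replicate methanol results (ppm) at a 200 ppm spike level:
print(passes_total_error(np.array([188.0, 195.5, 203.2, 191.4, 199.8]), 200.0))
```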

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 147
1273 Linkages between Innovation Policies and SMEs' Innovation Activities: Empirical Evidence from 15 Transition Countries

Authors: Anita Richter

Abstract:

Innovation is one of the key foundations of competitive advantage, generating growth and welfare worldwide; consequently, all firms should innovate to bring new ideas to the market. Innovation is a vital growth driver, particularly for transition countries moving towards knowledge-based, high-income economies. However, numerous barriers, such as financial, regulatory, or infrastructural constraints, prevent new and small firms in transition countries in particular from innovating. SMEs’ innovation output may thus benefit substantially from government support. This research paper aims to assess the effect of government interventions on innovation activities in SMEs in emerging countries. Until now, academic research on innovation policies has focused either on single-country and/or high-income-country assessments, with less attention to cross-country and/or low- and middle-income settings. The paper therefore seeks to close this research gap by providing empirical evidence from 8,500 firms in 15 transition countries (Eastern Europe, South Caucasus, South East Europe, Middle East, and North Africa). Using firm-level data from the Business Environment and Enterprise Performance Survey of the World Bank and EBRD and policy data from the SME Policy Index of the OECD, the paper investigates how government interventions affect an SME’s likelihood of investing in technological and non-technological innovation. Using standard linear regression, the impact of government interventions on SMEs’ innovation output and R&D activities is measured. The empirical analysis suggests that a firm’s decision to invest in innovative activities is sensitive to government interventions: a firm’s likelihood of investing in innovative activities increases by 3% to 8% if the innovation eco-system noticeably improves (measured by an increase of one level in the SME Policy Index). At the same time, a better eco-system encourages SMEs to invest more in R&D. Government reforms establishing a dedicated policy framework (IP legislation), institutional infrastructure (science and technology parks, incubators), and financial support (public R&D grants, innovation vouchers) are particularly relevant to stimulating innovation performance in SMEs. Particular segments of the SME population, namely micro and manufacturing firms, are more likely to benefit from improved innovation framework conditions. The marginal effects are particularly strong on product, process, and marketing innovation, but less so on management innovation. In conclusion, government interventions supporting innovation are likely to lead to higher innovation performance of SMEs; they increase productivity at both firm and country level, which is a vital step in the transition towards knowledge-based market economies.
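
A sketch of the kind of linear regression described above: a firm's 0/1 decision to invest in innovation regressed on the country-level SME Policy Index plus firm controls (a linear probability model). The synthetic data and variable names are stand-ins for the BEEPS/SME Policy Index merge, not the actual survey schema:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 8500  # sample size matching the paper's firm count
df = pd.DataFrame({
    "sme_policy_index": rng.uniform(1.0, 5.0, n),  # policy levels 1..5
    "firm_size": rng.integers(1, 250, n),          # employees
    "exporter": rng.integers(0, 2, n),
})
# Toy outcome: +5 p.p. likelihood per index level, inside the 3-8% range reported
p = 0.2 + 0.05 * df["sme_policy_index"] + 0.1 * df["exporter"]
df["innovates"] = rng.binomial(1, p.clip(0.0, 1.0))

fit = smf.ols("innovates ~ sme_policy_index + firm_size + exporter",
              data=df).fit(cov_type="HC1")       # robust standard errors
print(fit.params["sme_policy_index"])            # recovers roughly the 0.05 effect
```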

Keywords: innovation, research and development, government interventions, economic development, small and medium-sized enterprises, transition countries

Procedia PDF Downloads 324
1272 USBware: A Trusted and Multidisciplinary Framework for Enhanced Detection of USB-Based Attacks

Authors: Nir Nissim, Ran Yahalom, Tomer Lancewiki, Yuval Elovici, Boaz Lerner

Abstract:

Background: Attackers increasingly take advantage of innocent users who tend to use USB devices casually, assuming these devices are benign when in fact they may carry embedded malicious behavior or hidden malware. USB devices have many properties and capabilities that have become the subject of malicious operations. Many of the recent attacks targeting individuals, and especially organizations, utilize popular and widely used USB devices such as mice, keyboards, flash drives, printers, and smartphones. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched via USB devices. Significance: We propose USBWARE, a project that focuses on the vulnerabilities of USB devices and centers on the development of a comprehensive detection framework that relies upon a crucial attack repository. USBWARE will allow researchers and companies to better understand the vulnerabilities and attacks associated with USB devices, as well as provide a comprehensive platform for developing detection solutions. Methodology: The USBWARE framework is aimed at accurate detection of both known and unknown USB-based attacks through a process that efficiently enhances the framework's detection capabilities over time. The framework will integrate two main security approaches in order to enhance the detection of USB-based attacks associated with a variety of USB devices. The first approach is aimed at the detection of known attacks and their variants, whereas the second approach focuses on the detection of unknown attacks. USBWARE will consist of six independent but complementary detection modules, each detecting attacks based on a different approach or discipline. These modules include novel ideas and algorithms inspired by, or already developed within, our team's domains of expertise, including cyber security, electrical and signal processing, machine learning, and computational biology. The establishment and maintenance of USBWARE’s dynamic and up-to-date attack repository will strengthen the capabilities of the detection framework. The attack repository’s infrastructure will enable researchers to record, document, create, and simulate existing and new USB-based attacks. This data will be used to keep the detection framework updatable by incorporating knowledge regarding new attacks. Based on our experience in the cyber security domain, we aim to design the USBWARE framework so that it has several characteristics that are crucial for this type of cyber-security detection solution; specifically, it should be novel, multidisciplinary, trusted, lightweight, extendable, modular, updatable, and adaptable. Major Findings: Based on our initial survey, we have already found more than 23 types of USB-based attacks, divided into six major categories. Our preliminary evaluation and proofs of concept showed that our detection modules can be used for efficient detection of several basic known USB attacks. Further research, development, and enhancement are required so that USBWARE will be capable of covering all of the major known USB attacks and of detecting unknown attacks. Conclusion: USBWARE is a crucial detection framework that must be further enhanced and developed.
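
As a toy illustration of the first approach (detecting known attacks and their variants), the sketch below checks an enumerated device's descriptors against a repository of known-malicious signatures and against the interface classes expected for that device; the descriptor fields, repository format, and policy are our assumptions, not USBWARE's actual modules:

```python
# (vendor_id, product_id) pairs documented in the attack repository (invented here)
KNOWN_BAD = {(0x1D6B, 0xBEEF)}

def assess_device(vendor_id: int, product_id: int,
                  interface_classes: set[int], expected_classes: set[int]) -> str:
    if (vendor_id, product_id) in KNOWN_BAD:
        return "block: matches known-attack signature"
    if not interface_classes <= expected_classes:
        # A composite device exposing more than it should is the classic
        # BadUSB pattern, e.g. a 'flash drive' that also enumerates a keyboard.
        return "alert: unexpected interface class, route to anomaly modules"
    return "allow"

# A registered flash drive should expose mass storage (0x08) only; one that
# also presents a HID keyboard interface (0x03) is flagged:
print(assess_device(0x0951, 0x1666, {0x08, 0x03}, expected_classes={0x08}))
```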

Keywords: USB, device, cyber security, attack, detection

Procedia PDF Downloads 396
1271 Rural Entrepreneurship as a Response to Climate Change and Resource Conservation

Authors: Omar Romero-Hernandez, Federico Castillo, Armando Sanchez, Sergio Romero, Andrea Romero, Michael Mitchell

Abstract:

Environmental policies for resource conservation in rural areas include subsidies on services and social programs to cover living expenses. The government's expectation is that rural communities who benefit from social programs, such as payments for ecosystem services, have an incentive to conserve natural resources and preserve natural sinks for greenhouse gases. At the same time, global climate change has affected the lives of people worldwide. The capacity to adapt to global warming depends on the available resources and the standard of living, putting rural communities at a disadvantage. This paper explores whether rural entrepreneurship can represent a solution to resource conservation and adaptation to global warming in rural communities. The research focuses on a sample of two coffee-growing communities in Oaxaca, Mexico. Researchers used geospatial information contained in aerial photographs of the geographical areas of interest. Households were identified in the photos via their roofs and georeferenced via coordinates. From the household population, a random selection of roofs was made, and each selected household was visited. A total of 112 surveys were completed, including questions on socio-demographics, perception of climate change, and adaptation activities. The population comprises two study groups: entrepreneurs and non-entrepreneurs. Data were sorted, filtered, and validated. The analysis includes descriptive statistics for exploratory purposes and a multiple regression analysis. Outcomes from the surveys indicate that coffee farmers who demonstrate entrepreneurial skills and hire employees are more eager to adapt to climate change despite the extremely adverse socioeconomic conditions of the region. We show that farmers with entrepreneurial tendencies are more creative in using innovative farm practices, such as planting shade trees, using live fencing instead of wires, and applying watershed protection techniques, among others. This result counters the notion that small farmers are at the mercy of climate change with no possibility of adapting to a changing climate. The study also points to roadblocks farmers face when coping with climate change, among them a lack of extension services and of access to credit and reliable internet, all of which reduce access to vital information needed in today’s constantly changing world. Results indicate that, under some circumstances, funding and supporting entrepreneurship programs may provide more benefit than traditional social programs.

Keywords: entrepreneurship, global warming, rural communities, climate change adaptation

Procedia PDF Downloads 239
1270 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of the seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used to forecast ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as artificial neural networks, random forests, and support vector machines. The results indicate that these algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with random forests in particular outperforming the other algorithms; the conventional method remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time than conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
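
A minimal sketch of the first application, a data-driven ground-motion model: a random forest predicts ln(PGA) from magnitude, source-to-site distance, and a site proxy (Vs30) with no pre-defined functional form. The synthetic records below merely stand in for a real ground-motion catalogue:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(3.0, 7.5, n)                 # moment magnitude
log_dist = np.log(rng.uniform(5.0, 200.0, n))  # ln of distance, km
vs30 = rng.uniform(150.0, 760.0, n)            # site stiffness proxy, m/s
# Toy attenuation relation + noise, used only to generate training data:
ln_pga = 1.1 * mag - 1.3 * log_dist - 0.002 * vs30 + rng.normal(0.0, 0.5, n)

X = np.column_stack([mag, log_dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out records: {rf.score(X_te, y_te):.2f}")
```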

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 103
1269 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DESs) and ionic liquids (ILs), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application’s conventional benchmark. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels; for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties and their cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%) with a remarkable equilibrium time of only 2 minutes; most notably, they demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANNs). The models demonstrated high accuracy, indicated by their low average absolute relative deviations: 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using the ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including high efficiency in both extraction and regeneration, stability, and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components such as thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents marks a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
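
The model-quality metric reported above, the average absolute relative deviation (AARD), is straightforward to compute; a minimal implementation, with invented efficiency values for illustration:

```python
import numpy as np

def aard_percent(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Average absolute relative deviation between predictions and measurements, %."""
    return 100.0 * np.mean(np.abs((predicted - measured) / measured))

# e.g. hypothetical extraction efficiencies (%) for a few Thy:DecA runs:
measured = np.array([94.1, 92.7, 95.3, 93.8])
predicted = np.array([94.0, 92.9, 95.1, 94.2])
print(f"AARD = {aard_percent(measured, predicted):.2f}%")
```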

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 68
1268 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge for doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years; several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 images in total, of which 8,491 are images of abnormal cells and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be deemed auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and also to overcome the problem of vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance, with their pros and cons, will be presented at the conference.
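
A minimal sketch of the fusion idea under stated assumptions: two ImageNet-pretrained backbones share one input, their pooled convolutional outputs are concatenated, and a small head classifies normal vs. abnormal cells. The pooling choice, head sizes, and frozen backbones are our simplifications; the paper additionally fuses features from specific lower abstraction layers:

```python
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG19, ResNet50

inp = Input(shape=(224, 224, 3))
vgg = VGG19(include_top=False, weights="imagenet", input_tensor=inp)
res = ResNet50(include_top=False, weights="imagenet", input_tensor=inp)
for layer in vgg.layers + res.layers:        # transfer learning: freeze backbones
    layer.trainable = False

feat = layers.Concatenate()([
    layers.GlobalAveragePooling2D()(vgg.output),  # 512-d VGG19 features
    layers.GlobalAveragePooling2D()(res.output),  # 2048-d ResNet50 features
])
hidden = layers.Dense(256, activation="relu")(feat)
out = layers.Dense(1, activation="sigmoid")(hidden)  # abnormal vs. normal

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```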

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 186
1267 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection, with two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time; three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person entering a password is a genuine user or an imposter. Five different machine learning algorithms were deployed to compare performance in user authentication: decision trees, linear discriminant analysis, the naive Bayes classifier, support vector machines (SVMs) with a Gaussian radial basis kernel function, and k-nearest neighbors. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder for authentication. For each password entered by a user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions: Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5, 10, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, the behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
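
A sketch of the best-performing configuration, an RBF-kernel SVM on the four normalized features (score, length, speed, size); the random matrices below stand in for the genuine/imposter feature data, which is not public:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_genuine = rng.normal(0.0, 1.0, size=(300, 4))   # 4 features per password entry
X_imposter = rng.normal(0.8, 1.2, size=(300, 4))  # shifted imposter distribution
X = np.vstack([X_genuine, X_imposter])
y = np.array([1] * 300 + [0] * 300)               # 1 = genuine, 0 = imposter

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())    # accuracy, cf. ~74% in Session 1
```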

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 102
1266 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine

Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski

Abstract:

Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in new car sales increases every year; however, their performance and, above all, their range still cannot be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging takes hours, which is hard to accept given the short time needed to refill a fuel tank. Therefore, ways are being sought to reduce the adverse features of cars equipped with electric motors only. One method is the combination of an electric motor as the main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers such as Mazda and Audi, together with top companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. The electricity generator is powered by a Wankel engine, which had seemed to pass into history. This small, lightweight engine with a rotating piston and a very low vibration level turned out to be an excellent source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, such as high fuel consumption, high emission of toxic substances, and the short lifetime typical of its traditional application. Operating the engine at a constant rotational speed significantly increases its lifetime, and its small external dimensions allow compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus. The research object was the Aixro XR50 rotary engine with the electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, capable of applying any number of load cycles. The parameters recorded included speed and torque as well as the throttle position in the inlet system. Increasing and decreasing the load did not significantly change engine speed, which means that the control algorithm parameters were correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
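
The abstract does not state the control law used; a PI speed governor is a common choice for this task and is sketched below as an illustration: the throttle opening is adjusted from the speed error so the engine holds its setpoint while the eddy current brake varies the load torque. The gains and setpoint are invented:

```python
class PISpeedGovernor:
    """Minimal PI governor: speed error in, clamped throttle opening (0..1) out."""

    def __init__(self, kp: float, ki: float, setpoint_rpm: float):
        self.kp, self.ki = kp, ki
        self.setpoint = setpoint_rpm
        self.integral = 0.0

    def update(self, measured_rpm: float, dt: float) -> float:
        error = self.setpoint - measured_rpm
        self.integral += error * dt               # no anti-windup in this sketch
        throttle = self.kp * error + self.ki * self.integral
        return min(max(throttle, 0.0), 1.0)       # clamp to 0..100% opening

# e.g. one 10 ms control step with the engine sagging under increased load:
governor = PISpeedGovernor(kp=0.002, ki=0.0005, setpoint_rpm=5000.0)
print(governor.update(measured_rpm=4850.0, dt=0.01))
```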

Keywords: electric vehicle, power generator, range extender, Wankel engine

Procedia PDF Downloads 156