Search results for: varying an axial load
573 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. However, in recent years air pollution has increased further, through pollutants such as nitrogen oxides, acids, etc. To address this problem, carbon and nitrogen oxides need to be reduced through lean burning, geometrically modified combustors, and fuel dilution. A numerical investigation has been carried out into the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, neat or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen into the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing the results to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios in the fuel blends from lean to rich; the effects on flame temperature, shape, velocity, and concentrations of radicals and emissions were observed.
It was determined that the reduced mechanisms provided results within an acceptable range. Varying the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained in lean conditions (equivalence ratio 0.5-0.9). Addition of hydrogen into the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow and varying equivalence ratio with hydrogen addition. The production of NO is reduced because the combustion happens in a leaner state, which helps in solving environmental problems.
Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
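The lean-to-rich sweep described in this abstract is parameterized by the equivalence ratio, φ = (F/A)/(F/A)_stoich, with φ < 1 lean and φ > 1 rich. As a hedged illustration (not part of the study's code; the methane stoichiometric air/fuel mass ratio of about 17.2 is an assumed textbook value), a minimal sketch:

```python
def equivalence_ratio(fuel_air_mass_ratio, stoich_fuel_air_mass_ratio):
    """Equivalence ratio phi = (F/A) / (F/A)_stoich.

    phi < 1: lean mixture, phi = 1: stoichiometric, phi > 1: rich.
    """
    return fuel_air_mass_ratio / stoich_fuel_air_mass_ratio

# For methane the stoichiometric air/fuel mass ratio is roughly 17.2,
# so (F/A)_stoich is about 1 / 17.2 (assumed illustrative value).
STOICH_FA_METHANE = 1.0 / 17.2

phi = equivalence_ratio(0.7 / 17.2, STOICH_FA_METHANE)
print(round(phi, 3))  # 0.7 -> a lean condition, inside the 0.5-0.9 range studied
```

For hydrogen-diluted blends the stoichiometric ratio of the mixture must be recomputed from the blend composition before applying the same definition.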
Procedia PDF Downloads 113
572 Spatial Distribution, Characteristics, and Pollution Risk Assessment of Microplastics in Sediments from Karnaphuli River Estuary, Bangladesh
Authors: Md. Refat Jahan Rakib, M. Belal Hossain, Rakesh Kumar, Md. Akram Ullah, Sultan Al Nahian, Nazmun Naher Rima, Tasrina Rabia Choudhury, Samia Islam Liba, Jimmy Yu, Mayeen Uddin Khandaker, Abdelmoneim Sulieman, Mohamed Mahmoud Sayed
Abstract:
Microplastics (MPs) have become an emerging global pollutant due to their widespread dispersion and potential threats to marine ecosystems. However, studies on MPs in the estuarine and coastal ecosystems of Bangladesh are very limited or not available. Here, we conducted the first study on the abundance, distribution, characteristics, and potential risk of microplastics in the sediment of the Karnaphuli River estuary, Bangladesh. Microplastic particles were extracted from sediments of 30 stations along the estuary by density separation and then enumerated and characterized using a stereomicroscope and Fourier transform infrared (FT-IR) spectroscopy. In the collected sediment, the number of MPs varied from 22.29 to 59.5 items kg−1 of dry weight (DW), with an average of 1177 particles kg−1 DW. The mean abundance was higher in the downstream reaches and on the left bank of the estuary, where the predominant shape, colour, and size of MPs were films (35%), white (19%), and >5000 μm (19%), respectively. The main polymer types were polyethylene terephthalate, polystyrene, polyethylene, cellulose, and nylon. MPs were found to pose risks (low to high) in the sediment of the estuary, with the highest risk occurring at one station near a sewage outlet, according to the results of risk analyses using the pollution risk index (PRI), polymer risk index (H), contamination factors (CFs), and pollution load index (PLI). The single-value index PLI clearly demonstrated that all sampling sites were considerably polluted (PLI > 1) with microplastics. H values showed that toxic polymers possess higher polymeric hazard scores even at lower proportions, and vice versa. This investigation uncovered new insights into the status of MPs in the sediments of the Karnaphuli River estuary, laying the groundwork for future research on, and control and management of, microplastic pollution.
Keywords: microplastics, polymers, pollution risk assessment, Karnaphuli estuary
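The contamination factor and pollution load index cited here have standard closed forms: CF_i = C_i / C_baseline, and PLI is the n-th root of the product of the CFs, with PLI > 1 read as polluted. A minimal sketch with hypothetical abundances (the baseline value below is illustrative, not from the study):

```python
import math

def contamination_factor(measured, baseline):
    """CF = measured concentration / baseline (background) concentration."""
    return measured / baseline

def pollution_load_index(cfs):
    """PLI = n-th root of the product of the contamination factors.
    PLI > 1 indicates the site is considerably polluted."""
    return math.prod(cfs) ** (1.0 / len(cfs))

# Hypothetical MP abundances (items/kg DW) at three stations vs. an
# assumed background of 10 items/kg DW.
baseline = 10.0
stations = [22.0, 40.0, 59.5]
cfs = [contamination_factor(c, baseline) for c in stations]
print(pollution_load_index(cfs) > 1)  # True -> considerably polluted
```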
Procedia PDF Downloads 79571 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis
Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman
Abstract:
Introduction: In the realm of public health, the threat posed by Monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the observed outbreak in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of Monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of Monkeypox, with a particular focus on a variable-population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data from the recorded Nigerian cases of 2022 for epidemiological weeks 1-52. Transitioning from qualitative to quantitative, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions' economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating Monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation.
The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness
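The abstract does not reproduce the model equations, so as a heavily simplified stand-in (not the authors' model; parameters are hypothetical) a minimal SIR-style compartmental sketch with forward-Euler integration shows the role of the threshold R0 = β/γ that the stability result refers to:

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, days=200, dt=0.1):
    """Forward-Euler integration of a minimal SIR model on population
    fractions. Returns the peak infectious fraction over the run."""
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

# R0 = beta / gamma; the epidemic grows only when R0 > 1.
print(simulate_sir(beta=0.3, gamma=0.1) > 0.01)    # R0 = 3 -> outbreak: True
print(simulate_sir(beta=0.05, gamma=0.1) > 0.011)  # R0 = 0.5 -> dies out: False
```

The optimal-control version replaces constant β and γ with time-dependent controls and minimizes a cost functional over them, which is beyond this sketch.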
Procedia PDF Downloads 85
570 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar-absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated against measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep.
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as against the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency- and angle-dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented and their validation discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown, and thus the importance of accurate dielectric material properties for validation purposes is discussed.
Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
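For the flat-plate class of canonical targets, the physical-optics peak RCS at normal incidence has the well-known closed form σ = 4π(ab)²/λ². Note this is the perfectly conducting reference case, not the lossy-dielectric computation validated in the paper; the plate dimensions below are assumed, not those of the measured targets:

```python
import math

def plate_rcs_normal(a, b, freq_hz):
    """Physical-optics RCS (m^2) of a perfectly conducting a x b flat
    plate at normal incidence: sigma = 4*pi*(a*b)^2 / lambda^2."""
    c = 299_792_458.0          # speed of light, m/s
    lam = c / freq_hz          # wavelength, m
    return 4.0 * math.pi * (a * b) ** 2 / lam ** 2

def to_dbsm(sigma_m2):
    """Convert RCS in m^2 to dB relative to 1 m^2 (dBsm)."""
    return 10.0 * math.log10(sigma_m2)

# Hypothetical 10 cm x 10 cm plate at 10 GHz (inside the 2-18 GHz sweep).
sigma = plate_rcs_normal(0.10, 0.10, 10e9)
print(round(to_dbsm(sigma), 1))  # about 1.5 dBsm for this reference case
```

For a lossy dielectric plate the PO result must additionally be weighted by the material's reflection coefficient, which is where the sensitivity to permittivity tolerances discussed above enters.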
Procedia PDF Downloads 236
569 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric assessing these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts is undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined.
A reinforcement-learning-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
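The core availability-under-degradation computation can be sketched in miniature: knock out nodes at random and count how often a required function (here reduced to src-dst reachability on a toy four-node architecture) survives. This is a hedged illustration of the metric's shape, not the paper's simulation; the topology, failure probability, and survival of the endpoints are all assumptions:

```python
import random

def is_connected(alive, links, src, dst):
    """Breadth-first search over surviving nodes: is dst reachable from src?"""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        for a, b in links:
            if a == node:
                nxt = b
            elif b == node:
                nxt = a
            else:
                continue
            if nxt in alive and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return dst in seen

def availability(all_nodes, links, src, dst, p_fail, trials=2000, seed=7):
    """Monte Carlo estimate of functional availability: the fraction of
    trials in which the src-dst function survives random node knockouts.
    The endpoints are assumed to survive so the metric isolates relay loss."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = {n for n in all_nodes
                 if n in (src, dst) or rng.random() > p_fail}
        ok += is_connected(alive, links, src, dst)
    return ok / trials

# Toy architecture: two redundant relay nodes between source and sink,
# so the function fails only when both relays are knocked out.
nodes = {"src", "r1", "r2", "dst"}
links = [("src", "r1"), ("src", "r2"), ("r1", "dst"), ("r2", "dst")]
print(availability(nodes, links, "src", "dst", p_fail=0.3))
```

With per-relay failure probability 0.3 the analytic availability is 1 − 0.3² = 0.91; sweeping `p_fail` from 0 toward 1 traces the degradation spectrum the proposed metric summarizes, and the adversarial agent replaces the uniform random knockouts.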
Procedia PDF Downloads 106
568 Threading Professionalism Through Occupational Therapy Curriculum: A Framework and Resources
Authors: Ashley Hobson, Ashley Efaw
Abstract:
Professionalism is an essential skill for clinicians, particularly for occupational therapy providers (OTPs). The World Federation of Occupational Therapists (WFOT) Guiding Principles for Ethical Occupational Therapy and the American Occupational Therapy Association (AOTA) Code of Ethics establish expectations for professionalism among OTPs, emphasizing its importance in the field. However, the teaching and assessment of professionalism vary across OTP programs. The flexibility provided by national standards allows programs to determine their own approaches to meeting these standards, resulting in inconsistency. Educators in both academic and fieldwork settings face challenges in objectively assessing and providing feedback on student professionalism. Although they observe instances of unprofessional behavior, there is no standardized assessment measure to evaluate professionalism in OTP students. While most students are committed to learning and applying professionalism skills, they enter OTP programs with varying levels of proficiency in this area. Consequently, they lack a uniform understanding of professionalism and an objective means to self-assess their current skills and identify areas for growth. It is crucial to explicitly teach professionalism, have students self-assess their professionalism skills, and have OTP educators assess student professionalism. This approach is necessary for fostering students' professionalism journeys. Traditionally, there has been no objective way for students to self-assess their professionalism or for educators to provide objective assessments and feedback. To establish a uniform approach to professionalism, the authors incorporated professionalism content into their curriculum. Utilizing an operational definition of professionalism, the authors integrated professionalism into didactic, fieldwork, and capstone courses.
The complexity of the content and the professionalism skills expected of students increase each year to ensure students graduate with the skills to practice in accordance with the WFOT Guiding Principles for Ethical Occupational Therapy Practice and the AOTA Code of Ethics. Two professionalism assessments were developed based on the expectations outlined in both documents. The Professionalism Self-Assessment allows students to evaluate their professionalism, reflect on their performance, and set goals. The Professionalism Assessment for Educators is a modified version of the same tool designed for educators. The purpose of this workshop is to provide educators with a framework and tools for assessing student professionalism. The authors discuss how to integrate professionalism content into the OTP curriculum and utilize professionalism assessments to provide constructive feedback and equitable learning opportunities for OTP students in academic, fieldwork, and capstone settings. By adopting these strategies, educators can enhance the development of professionalism among OTP students, ensuring they are well prepared to meet the demands of the profession.
Keywords: professionalism, assessments, student learning, student preparedness, ethical practice
Procedia PDF Downloads 41
567 TiO2 Solar Light Photocatalysis a Promising Treatment Method of Wastewater with Trinitrotoluene Content
Authors: Ines Nitoi, Petruta Oancea, Lucian Constantin, Laurentiu Dinu, Maria Crisan, Malina Raileanu, Ionut Cristea
Abstract:
2,4,6-Trinitrotoluene (TNT) is the most common pollutant identified in wastewater generated from munitions plants where this explosive is synthesized or handled (munitions load, assembly, and pack operations). Due to their toxic and suspected carcinogenic characteristics, nitroaromatic compounds like TNT are included on the list of priority pollutants and strictly regulated in EU countries. Since their presence in water bodies is risky for human health and aquatic life, powerful, modern treatment methods like photocatalysis must be developed in order to assure environmental pollution mitigation. The photocatalytic degradation of TNT was carried out at pH = 7.8, in an aqueous suspension of a TiO2-based catalyst, under sunlight irradiation. The enhanced photoactivity of the catalyst in the visible domain was assured by 0.5% Fe doping. TNT degradation experiments were performed using a tubular-collector-type solar photoreactor (26 UV-permeable silica glass tubes connected in series), plugged into a total recycle loop. The influence of substrate concentration and catalyst dose on the pollutant degradation and on the formation efficiencies of the mineralization by-products (NO2−, NO3−, NH4+) was studied. In order to compare the experimental results obtained under various working conditions, the measured concentrations of the pollutant and of the mineralization by-products have been considered as functions of irradiation time and of the cumulative photonic energy Qhν incident on the reactor surface (kJ/L). Under the tested experimental conditions, at pollutant concentrations of tens of mg/L, an increase of the 0.5% Fe-doped TiO2 dose up to 200 mg/L leads to the enhancement of TNT degradation efficiency.
Since doubling the TNT content has a negative effect on pollutant degradation efficiency, under similar experimental conditions a prolonged irradiation time, from 360 to 480 min, was necessary in order to assure the compliance of the treated effluent with the limits imposed by EU legislation (TNT ≤ 10 µg/L).
Keywords: wastewater treatment, TNT, photocatalysis, environmental engineering
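Photocatalytic degradation of this kind is commonly modelled with pseudo-first-order kinetics, C(t) = C₀·e^(−kt), which inverts to t = ln(C₀/C_limit)/k for the time needed to reach a target concentration. A sketch with an assumed rate constant (not fitted to this study's data) that happens to land inside the 360-480 min window reported above:

```python
import math

def concentration(c0, k, t):
    """Pseudo-first-order decay: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def time_to_limit(c0, k, c_limit):
    """Invert C(t) = c_limit: t = ln(C0 / c_limit) / k."""
    return math.log(c0 / c_limit) / k

# Hypothetical: 20 mg/L TNT (20,000 ug/L), assumed k = 0.016 1/min.
t = time_to_limit(20_000.0, 0.016, 10.0)
print(round(t))  # ~475 min to reach the 10 ug/L EU limit
```

Doubling C₀ only adds ln(2)/k to the required time under this model, which is consistent in spirit with the prolonged irradiation reported for the doubled TNT content.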
Procedia PDF Downloads 355
566 Overcoming Barriers to Improve HIV Education and Public Health Outcomes in the Democratic Republic of Congo
Authors: Danielle A. Walker, Kyle L. Johnson, Tara B. Thomas, Sandor Dorgo, Jacen S. Moore
Abstract:
Approximately 37 million people worldwide are infected with the human immunodeficiency virus (HIV), with the majority located in sub-Saharan Africa. The relationship between HIV incidence and socioeconomic inequity confirms the critical need for programs promoting HIV education, prevention, and treatment access. This literature review analyzed 36 sources with a specific focus on the Democratic Republic of Congo, whose critically low socioeconomic status and education rate have resulted in drastically high HIV rates. Relationships between HIV testing and treatment and barriers to care were explored. Cultural and religious considerations were found to be vital when creating and implementing HIV education and testing programs. Partnerships encouraging active support from community-based spiritual leaders to implement HIV educational programs were also key mechanisms for reaching communities and individuals. Gender roles were highlighted as a key component in implementing effective community trust-building and successful HIV education programs. The efficacy of added support by hospitals and clinics in rural areas to facilitate access to HIV testing and care for people living with HIV/AIDS (PLWHA) was discussed. This review highlighted the need for healthcare providers to offer a network of continued education for PLWHA in clinical settings at disclosure and throughout the course of treatment, to increase retention in care and promote medication adherence for viral load suppression. Implementation of culturally sensitive models that rely on community familiarity with HIV educators, such as 'train-the-trainer', was also proposed as an efficacious tool for educating rural communities about HIV.
Further research is needed to promote community partnerships for HIV education, understand the cultural context of gender roles as barriers to care, and empower local health care providers to be successful within the HIV continuum of care.
Keywords: cultural sensitivity, Democratic Republic of the Congo, education, HIV
Procedia PDF Downloads 273
565 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior from the applied reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure, using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry, and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses that systematically vary process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
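The Monte Carlo step samples explicit chains consistent with the kinetically determined averages. As a heavily simplified stand-in (not the authors' algorithm, and without the branching they model), chain lengths can be drawn from the Flory most-probable distribution, P(n) = (1 − p)·p^(n−1), whose number-average degree of polymerization is 1/(1 − p):

```python
import random

def sample_chain_lengths(p, n_chains, seed=1):
    """Draw chain lengths from the Flory most-probable distribution,
    i.e. a geometric distribution where p is the probability that a
    growing chain propagates by one more monomer at each step."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_chains):
        n = 1
        while rng.random() < p:
            n += 1
        lengths.append(n)
    return lengths

p = 0.99  # propagation probability -> number-average DP of 1/(1-p) = 100
chains = sample_chain_lengths(p, 20_000)
print(sum(chains) / len(chains))  # close to the theoretical DP_n = 100
```

The full hybrid scheme additionally samples short- and long-chain branching events per chain from the kinetic rates, producing the branched architectures fed to the branch-on-branch rheology step.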
Procedia PDF Downloads 123
564 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, structural health monitoring is a topic of growing interest. Effective monitoring instruments permit control of the working conditions of structures and infrastructure through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake-prone regions. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are developing in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for structural health monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored.
Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh paste, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring
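The piezoresistive correlation described above is conventionally expressed through a gauge factor, ΔR/R₀ = GF·ε, so strain is recovered as ε = (ΔR/R₀)/GF. A minimal sketch with assumed values (the gauge factor of an actual CNT-cement composite depends on the nanofiller dispersion and must be calibrated experimentally):

```python
def strain_from_resistance(r0_ohm, r_ohm, gauge_factor):
    """Estimate axial strain from the fractional resistance change:
    strain = (delta_R / R0) / GF."""
    return (r_ohm - r0_ohm) / r0_ohm / gauge_factor

# Assumed: unloaded resistance 1000 ohm, GF = 100 (illustrative only;
# real CNT-cement values depend on mix design and dispersion quality).
eps = strain_from_resistance(1000.0, 1001.0, 100.0)
print(eps)  # 1e-05, i.e. 10 microstrain
```

For the dynamic characterization discussed above, the same relation is applied sample-by-sample to a resistance time series, so the bandwidth of the resistance measurement bounds the detectable vibration frequencies.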
Procedia PDF Downloads 227
563 Early Influences on Teacher Identity: Perspectives from the USA and Northern Ireland
Authors: Martin Hagan
Abstract:
Teacher identity has been recognised as a crucial field of research which supports understanding of the ways in which teachers navigate the complexities of professional life in order to grow in competence, knowledge and practice. As a field of study, teacher identity is concerned with understanding: how identity is defined; how it develops; how teachers make sense of their emerging identity; and how the act of teaching is mediated through the individual teacher's values, beliefs and sense of professional self. By comparing two particular, socially constructed learning contexts or 'learning milieux', one in Northern Ireland and the other in the United States of America, this study aims specifically to gain a better understanding of how teacher identity develops during the initial phase of teacher education. The comparative approach was adopted on the premise that experiences are constructed through interactive, socio-historical and cultural negotiations with others within particular environments, situations and contexts. As such, whilst the common goal is to 'become' a teacher, the nuances emerging from the different learning milieux highlight variance in discourse, priorities, practice and influence. A qualitative, interpretative research design was employed to understand the world-constructions of the participants by asking open-ended questions, seeking views and perspectives, examining contexts and eventually deducing meaning. Data were collected using semi-structured interviews with a purposive sample of student teachers (n = 14) in either the first or second year of study in their respective institutions. In addition, a sample of teacher educators (n = 5) responsible for the design, organisation and management of the programmes were also interviewed.
Inductive thematic analysis was then conducted, which highlighted issues related to: the participants' personal dispositions, prior learning experiences and motivation; the influence of the teacher education programme on the participants' emerging professional identity; and the extent to which the experiences of working with teachers and pupils in schools in the context of the practicum challenged and changed perspectives on teaching as a professional activity. The study also highlights the varying degrees of influence exercised by the different roles (tutor, host teacher/mentor, student) within the teacher-learning process across the two contexts. The findings of the study contribute to the understanding of teacher identity development in the early stages of professional learning. By so doing, the research makes a valid contribution to the discourse on initial teacher preparation and can help to better inform teacher educators and policy makers in relation to appropriate strategies, approaches and programmes to support professional learning and positive teacher identity formation.
Keywords: initial teacher education, professional learning, professional growth, teacher identity
Procedia PDF Downloads 71
562 Analysis of Impact of Airplane Wheels Pre-Rotating on Landing Gears of Large Airplane
Authors: Huang Bingling, Jia Yuhong, Liu Yanhui
Abstract:
As an important part of an aircraft, the landing gear is responsible for the take-off and landing functions. In recent years, the structural mass of large airplanes has increased considerably. As a result, landing gears face stricter technical requirements than ever before, such as structural strength. If the structural strength of the landing gear is enhanced through traditional methods such as increasing structural mass, the negative impacts on the landing gear's function can be very serious and even counteract the positive effects. Thus, in order to solve this problem, the impact of pre-rotating the wheels on the performance of landing gears is studied in this paper, both theoretically and through experimental verification. Increasing the pre-rotation speed of the wheel can improve the performance of the landing gear and reduce the structural mass, the forces on joint parts, and other properties. In addition, pre-rotating the wheels has other advantages, such as reducing the friction between wheels and ground and extending the life of the wheel. In this paper, the impact of pre-rotation speed on landing gears and the connection between landing gear performance and pre-rotation speed are researched in detail. The paper is divided into three parts. In the first part, a large airplane landing gear model is built in CATIA and LMS. As the most common landing gear type on large airplanes, a four-wheel landing gear is chosen as the model. The second part simulates the landing process in LMS Motion and studies the impact of wheel pre-rotation on the aircraft's properties, including the buffer stroke, efficiency, and power; the friction, displacement, and relative speed between piston and sleeve; and the force and load distribution of the tires. The simulation results characterize the behavior at the different pre-rotation speeds. The third part presents the conclusions.
Based on the simulation data and the relationship between wheel pre-rotation speed and aircraft performance, a recommended speed interval is proposed. This paper is of theoretical value for improving the performance of large airplanes: setting a wheel pre-rotating speed is an effective way to improve aircraft performance without greatly increasing structural mass, avoiding the negative effects of traditional methods.
Keywords: large airplane, landing gear, pre-rotating, simulation
Procedia PDF Downloads 340
561 Development of Agomelatine Loaded Proliposomal Powders for Improved Intestinal Permeation: Effect of Surface Charge
Authors: Rajasekhar Reddy Poonuru, Anusha Parnem
Abstract:
Purpose: To formulate a proliposome powder of agomelatine, an antidepressant drug, and to evaluate its physicochemical and in vitro characteristics and the effect of surface charge on ex vivo intestinal permeation. Methods: A film deposition technique was employed to develop proliposomal powders of agomelatine with varying molar ratios of the lipid L-α-phosphatidylcholine (Hydro Soy PC, HSPC) and cholesterol, with a fixed amount of drug. With the aim of obtaining a free-flowing and stable proliposome powder, the fluid retention potential of various carriers was examined. Liposome formation, the number of vesicles formed per mm³ upon hydration, vesicle size, and entrapment efficiency were assessed to select an optimized formulation. Sodium cholate was added to the optimized formulation to induce a surface charge on the formed vesicles. Solid-state characterization (FTIR, DSC, and XRD) was performed to assess the native crystalline and chemical behavior of the drug. In vitro dissolution of the optimized formulation and of the pure drug was evaluated to estimate dissolution efficiency (DE) and relative dissolution rate (RDR). The effective permeability coefficient in rat (Peff(rat)) and the enhancement ratio (ER) of the drug from the formulation and from pure drug dispersion were calculated from ex vivo permeation studies in rat ileum. Results: The proliposomal powder formulated with an equimolar ratio of HSPC and cholesterol resulted in a higher number of vesicles (3.95 per mm³) with 90% drug entrapment upon hydration. Neusilin UFL2 was selected as the carrier because of its high fluid retention potential (4.5) and good flow properties. The proliposome powder exhibited increases in the DE (60.3±3.34) and RDR (21.2±1.02) of agomelatine over the pure drug. Solid-state characterization demonstrated the transformation of the native crystalline form of the drug to an amorphous and/or molecular state, in correlation with the results of the in vitro dissolution test.
The elevated Peff(rat) of 46.5×10⁻⁴ cm/sec and ER of 2.65 for the drug from the charge-induced proliposome formulation, relative to pure drug dispersion, were determined from ex vivo intestinal permeation studies in the ileum of Wistar rats. Conclusion: The improved physicochemical characteristics and ex vivo intestinal permeation of the drug from the charge-induced proliposome powder with Neusilin UFL2 reveal the potential of this system for enhancing the oral delivery of agomelatine.
Keywords: agomelatine, proliposome, sodium cholate, Neusilin
Procedia PDF Downloads 134
560 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots
Authors: Mrinalini Ranjan, Sudheesh Chethil
Abstract:
Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA measures fluctuations around local linear trends, and the scale invariance of these signals is well captured in the multifractal characterisation it provides. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals; the correlation properties of an EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, quantifying short- and long-range correlations. To illustrate this, we perform DFA on existing ictal (seizure) and interictal (seizure-free) datasets from different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to the pre-seizure period, with a subsequent increase in the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculated long-term scaling exponent lies between 0.5 and 1, pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour: an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the brain areas most affected during seizure activity. The nature of epileptic seizures varies from patient to patient.
To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal line length, entropy, recurrence rate, and determinism for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. For most patients, RQA parameters are higher during the seizure period than post-seizure, whereas for some patients the post-seizure values exceed those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding of the characterisation of epileptic EEG signals through nonlinear analysis.
Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots
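The scaling exponents described in this abstract come from the standard DFA procedure: integrate the mean-subtracted signal, cut it into windows, detrend each window, measure the RMS fluctuation, and fit a log-log slope. A minimal sketch of that procedure (an illustration of the general method, not the authors' implementation; the scale choices are arbitrary) might look like:

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Estimate the DFA scaling exponent of a 1-D signal.

    Steps: integrate the mean-subtracted signal, cut the profile into
    non-overlapping windows of each scale n, remove the local linear
    trend in every window, and compute the RMS fluctuation F(n).
    The exponent is the slope of log F(n) versus log n.
    """
    y = np.cumsum(signal - np.mean(signal))  # integrated profile
    fluct = []
    for n in scales:
        n_windows = len(y) // n
        mse = 0.0
        for i in range(n_windows):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            mse += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(mse / n_windows))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

# sanity check: uncorrelated noise should give an exponent near 0.5,
# while values between 0.5 and 1 indicate persistent LRTC
rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(8192), [16, 32, 64, 128, 256])
```

Fitting the slope over two separate scale ranges, as the authors do, yields the short-term and long-term exponents.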
Procedia PDF Downloads 173
559 Microbial Load, Prevalence and Antibiotic Resistance of Microflora Isolated from the Ghanaian Paper Currency Note: A Potential Health Threat
Authors: Simon Nyarko
Abstract:
This study examined microbial flora contamination of Ghanaian paper currency notes and the antibiotic resistance of the isolates in Ejura Municipality, Ashanti Region, Ghana. It is a descriptive cross-sectional study designed to profile the microflora contaminating the Ghanaian paper currency notes and their antibiotic resistance. The research was conducted in Ejura, a town in the Ejura Sekyeredumase Municipality of the Ashanti Region of Ghana. Seventy paper currency notes, consisting of 15 pieces each of GH¢1, GH¢2, and GH¢5, 10 pieces each of GH¢10 and GH¢20, and 5 pieces of GH¢50, were randomly sampled from people by exchanging their notes in circulation for notes freshly secured from the bank. The surfaces of each note were gently swabbed, sealed in sterile zip bags, and sent to the laboratory immediately, and tenfold serial dilutions were inoculated on plate count agar (PCA), MacConkey agar (MCA), mannitol salt agar (MSA), and deoxycholate citrate agar (DCA). Bacteria were identified using appropriate laboratory and biochemical tests, and the data were analyzed using IBM SPSS version 20.0. It was found that 95.2% of the 70 notes tested positive for one or more bacterial isolates. Mean counts on PCA ranged from 3.0×10⁵ to 4.8×10⁵ cfu/ml per note. Of the 124 bacteria isolated, 36 (29.03%), 32 (25.81%), 20 (16.13%), 16 (12.90%), 13 (10.48%), and 7 (5.66%) were from GH¢1, GH¢2, GH¢5, GH¢10, GH¢20, and GH¢50, respectively. The bacterial isolates were Escherichia coli (25.81%), Staphylococcus aureus (18.55%), coagulase-negative Staphylococcus (15.32%), Klebsiella species (12.10%), Salmonella species (9.68%), Shigella species (8.06%), Pseudomonas aeruginosa (7.26%), and Proteus species (3.23%). Meat shops, commercial drivers, canteens, grocery stores, and vegetable shops contributed 25.81%, 20.16%, 19.35%, 17.74%, and 16.94% of the notes, respectively.
All isolates (100%) were resistant to erythromycin (ERY) and cotrimoxazole (COT). Amikacin (AMK) was the most effective antibiotic, with 75% of the isolates susceptible to it. This study has demonstrated that Ghanaian paper currency notes are heavily contaminated with potentially pathogenic bacteria that are highly resistant to the most widely used antibiotics, and are thus a threat to public health.
Keywords: microflora, antibiotic resistance, Staphylococcus aureus, culture media, multi-drug resistance
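The cfu/ml values reported above are back-calculated from plate counts via the tenfold serial dilution. As an illustration of the arithmetic only (the colony count, dilution step, and plated volume below are hypothetical, not the study's data):

```python
def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=1.0):
    """Back-calculate CFU/ml of the original sample from a plate count.

    colonies          - colonies counted on the plate
    dilution_exponent - k for a 10^-k tenfold serial dilution
    plated_volume_ml  - volume of the diluted sample plated
    """
    dilution_factor = 10.0 ** -dilution_exponent
    return colonies / (dilution_factor * plated_volume_ml)

# e.g. 48 colonies on a plate inoculated with 1 ml of the 10^-4 dilution
estimate = cfu_per_ml(48, 4)  # ≈ 4.8×10⁵ CFU/ml
```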
Procedia PDF Downloads 106
558 The Effects of Billboard Content and Visible Distance on Driver Behavior
Authors: Arsalan Hassan Pour, Mansoureh Jeihani, Samira Ahangari
Abstract:
Distracted driving has been a central safety concern since the invention of the automobile. While much attention has recently been given to cell-phone-related distraction, commercial billboards along roads are also candidates for drivers' visual and cognitive distraction, as they may take drivers' eyes off the road and their minds off the driving task to see, perceive, and think about a billboard's content. Using a driving simulator and a head-mounted eye-tracking system, speed change, acceleration, deceleration, throttle response, collisions, lane changes, and offset from the center of the lane were collected in this study, along with gaze fixation duration and frequency. Some 92 participants from a fairly diverse sociodemographic background drove on a simulated freeway in the Baltimore, Maryland area and were exposed to three different billboards to investigate the effects of billboards on driver behavior. Participants glanced at the billboards several times with different frequencies; glances were most frequent at the billboard with the highest cognitive load. About 74% of the participants did not look at a billboard for more than two seconds per glance, except for the billboard with a short visible area. Analysis of variance (ANOVA) was performed to find variations in driving behavior across the zones where each billboard was invisible, readable, and already passed. The results show slight differences in speed, throttle, brake, steering velocity, and lane changing among the different zones. Brake force and deviation from the center of the lane increased in the readable zone compared with the visible zone, and speed increased right after each billboard. The results indicate that billboards have a significant effect on driving performance and visual attention, depending on their content and visibility status.
Generalized linear model (GLM) analysis showed no association of participants' age or driving experience with gaze duration. However, the visible distance of the billboard, gender, and billboard content had significant effects on gaze duration.
Keywords: ANOVA, billboards, distracted driving, driver behavior, driving simulator, eye-tracking system, GLM
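The ANOVA used to compare driving measures across billboard zones reduces to comparing between-zone and within-zone variance. A minimal sketch of the one-way F statistic, with made-up speed samples standing in for the study's measurements:

```python
import numpy as np

def one_way_anova(*groups):
    """One-way ANOVA F statistic: the ratio of between-group to
    within-group mean squares."""
    data = np.concatenate(groups)
    grand_mean = data.mean()
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # variation of observations around their own group mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# hypothetical mean speeds (mph) recorded in three billboard zones
invisible = np.array([61.0, 63.2, 60.8, 62.5])
readable = np.array([58.9, 59.5, 60.1, 58.2])
post = np.array([62.8, 64.0, 63.1, 62.2])
F = one_way_anova(invisible, readable, post)
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom indicates that at least one zone differs in mean speed.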
Procedia PDF Downloads 126
557 Structure and Mechanics Patterns in the Assembly of Type V Intermediate-Filament Protein-Based Fibers
Authors: Mark Bezner, Shani Deri, Tom Trigano, Kfir Ben-Harush
Abstract:
Intermediate filament (IF) protein-based fibers are among the toughest fibers in nature, as shown by native hagfish slime threads and by synthetic fibers based on type V IF proteins, the nuclear lamins. It is assumed that their mechanical performance stems from two major factors: (1) the transition from elastic α-helices to stiff β-sheets during tensile loading; and (2) the specific organization of the coiled-coil proteins into a hierarchical network of nano-filaments. Here, we investigated the interrelationship between these two factors using wet-spun fibers based on C. elegans (Ce) lamin. We found that Ce-lamin fibers, whether assembled in aqueous or alcoholic solutions, had the same nonlinear mechanical behavior, with the elastic region ending at ~5% strain. The pattern of the transition was, however, different: the ratio between α-helices and β-sheets/random coils was relatively constant up to a 20% strain for fibers assembled in aqueous solution, whereas for fibers assembled in 70% ethanol the transition ended at a 6% strain. This structural phenomenon in alcoholic solution probably occurred through a transition between compacted and extended conformations of the random coil, rather than between α-helix and β-sheet, as cycle analyses suggested. The different transition pattern can also be explained by the different higher-order organization of Ce-lamins in aqueous or alcoholic solutions, as demonstrated by introducing a point mutation at a conserved residue of the Ce-lamin gene that alters the structure of the Ce-lamin nano-fibrils. In addition, biomimicking the layered structure of silk and hair fibers by coating the Ce-lamin fiber with a hydrophobic layer enhanced fiber toughness and led to a reversible transition between the α-helix and the extended conformation.
This work suggests that different hierarchical structures, formed under specific assembly conditions, lead to diverse secondary-structure transition patterns, which in turn affect the fibers' mechanical properties.
Keywords: protein-based fibers, intermediate filament (IF) assembly, toughness, structure-property relationships
Procedia PDF Downloads 110
556 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud
Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal
Abstract:
Ever since the idea of delivering computing services as a commodity, like other utilities such as electricity and telephone service, was first floated, the scientific community has directed research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that direly needs to benefit from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements for the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand response features. But security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid
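The abstract does not detail the network architecture, but the general approach is a feed-forward classifier trained on labelled traffic features. As a rough sketch only (a small hand-rolled network on synthetic, hypothetical feature vectors; not the authors' model, features, or data):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, hidden=8, lr=1.0, epochs=3000, seed=0):
    """Train a one-hidden-layer feed-forward network by batch
    gradient descent on the mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # predicted attack probability
        # backpropagate the squared-error gradient through both layers
        d_out = (out - y[:, None]) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return lambda Xn: (sigmoid(sigmoid(Xn @ W1 + b1) @ W2 + b2) > 0.5).ravel()

# synthetic feature vectors standing in for traffic measurements:
# "normal" samples cluster low, "attack" samples cluster high
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.1, (100, 3)), rng.normal(0.8, 0.1, (100, 3))])
y = np.concatenate([np.zeros(100), np.ones(100)])
predict = train_mlp(X, y)
acc = (predict(X) == y.astype(bool)).mean()
```

A production intrusion detector would of course use real flow features, a held-out test set, and a more capable training setup; this only illustrates the supervised-classification framing.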
Procedia PDF Downloads 317
555 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery
Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald
Abstract:
Background: Caverno-venous leakage (CVL) is a devastating though little-known disease, the leading cause of major physical impairment in men under 25, and responsible for 50% of resistance to phosphodiesterase-5 inhibitors (PDE5-I), which affects 30 to 40% of users of this medication class. In this condition, premature blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Score: EHS) and hemodynamic evaluation by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status, and data were recorded prospectively. Results: Clinical signs suggesting CVL were ED since before age 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in 2 patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one decrease in glans sensitivity lasting more than one year. Mean pre-operative pharmacologic EHS was 2.37±0.64, and mean post-operative pharmacologic EHS was 3.21±0.60, p<0.0001 (paired t-test). The mean EHS change was 0.87±0.74. After surgery, 81.5% of patients had a pharmacologic EHS of 3 or more, allowing intercourse with penetration.
Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS of 3 or more, allowing sexual intercourse with penetration, one-third of them without any medication. Five patients received a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.
Keywords: erectile dysfunction, caverno-venous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography
Procedia PDF Downloads 148
554 Microalgae for Plant Biostimulants on Whey and Dairy Wastewaters
Authors: Sergejs Kolesovs, Pavels Semjonovs
Abstract:
Whey and dairy wastewaters, if disposed of in the environment without proper treatment, cause serious environmental risks, contributing to environmental pollution and climate change. Biological treatment of wastewater is considered the most eco-friendly approach, compared to chemical treatment methods. Research shows that dairy wastewater can potentially be remediated by the use of microalgae, thus significantly reducing the content of carbohydrates, P, N, K, and other pollutants. Moreover, it has been shown that the use of dairy wastewaters results in higher microalgal biomass production. In recent decades, microalgal biomass has attracted considerable interest for its potential applications in pharmaceuticals, biomedicine, health supplements, cosmetics, animal feed, plant protection, bioremediation, and biofuels. It has been shown that lipid productivity on whey and dairy wastewater is higher than on standard cultivation media and occurs without the need to induce specific stress conditions such as N starvation. Moreover, microalgal biomass production, usually associated with high production costs, may benefit in two ways: enhanced productivity of biomass or target substances on a cheap growth substrate, and effective management of whey and dairy wastewaters, both of which significantly decrease total production costs. This becomes especially important when large-volume, low-cost industrial microalgal biomass production is anticipated for further use in crop agriculture as plant growth stimulants, biopesticides, soil fertilisers, or remediating solutions. The environmental load of dairy wastewaters can be significantly decreased when microalgae are grown in coculture with other microorganisms. This enhances the utilisation of lactose, the main C source in whey and dairy wastewaters, which is not easily metabolised by most microalgal species.
Our study shows that certain microalgae strains can be used to treat industrial wastewaters containing residual sugars and to decrease their concentration, confirming that further extensive research is needed on dairy wastewater pre-treatment options for effective cultivation of microalgae, carbon uptake and metabolism, strain selection, and the choice of coculture candidates, in order to further optimise the process.
Keywords: microalgae, whey, dairy wastewaters, sustainability, plant biostimulants
Procedia PDF Downloads 91
553 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study
Authors: Scott Snyder
Abstract:
Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, the impacts of grades, and the benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, the varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point-system approach to grading. Responses were received from 104 instructors (a 21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all (92%) of the respondents reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college.
Only 17% disagreed that grades mean different things depending on the instructor, while 75% thought it would be good if there were agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were clarity to students and simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward adoption of alternative grading practices. Workshops are being offered on the challenges of using percentage and point systems for determining grades and on alternative approaches to grading.
Keywords: alternative approaches to grading, grades, higher education, letter grades
Procedia PDF Downloads 95
552 Examination of Calpurnia Aurea Seed Extract Activity Against Hematotoxicity and Hepatotoxicity in HAART Drug Induced Albino Wistar Rat
Authors: Haile Nega Mulata, Seifu Daniel, Umeta Melaku, Wendwesson Ergete, Natesan Gnanasekaran
Abstract:
Background: In Ethiopia, medicinal plants have been used for various human and animal diseases. In this study, we examined the potential effect of a hydroethanolic extract of Calpurnia aurea seed against hepatotoxicity and haematotoxicity induced by Highly Active Antiretroviral Therapy (HAART) drugs in albino Wistar rats. Methods: We collected matured dried seeds of Calpurnia aurea from northern Ethiopia (south Tigray and south Gondar) in June 2013. The powdered dried seed sample was macerated with 70% ethanol and dried using a rotary evaporator. We carried out preliminary phytochemical tests and investigated in vitro antioxidant properties. We then induced toxicity with HAART drugs and gave the experimental animals different doses of the crude extract orally for thirty-five days. On the 35th day, the animals were fasted overnight and sacrificed by cervical dislocation. We collected blood samples by cardiac puncture and excised the liver and brain tissues for further histopathological studies. Subsequently, we analysed serum levels of the liver markers alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, total bilirubin, and serum albumin using commercial kits on a Cobas Integra 400 Plus analyzer (Roche, Germany). We also assessed the haematological profile using an automated haematology analyser (Sysmex KX-21N). Results: A significant (P<0.05) decrease in serum enzymes (ALT and AST) and total bilirubin was observed in the groups that received the highest dose (300 mg/kg) of the seed extract, and significant (P<0.05) elevations of total red blood cell count, haemoglobin, and hematocrit percentage were observed in the groups that received the seed extract compared to the HAART-treated groups. Mean WBC counts showed a statistically significant increase (p<0.05) in the groups that received HAART with 200 and 300 mg/kg of extract, respectively.
Histopathological observations also showed that oral administration of varying doses of the crude seed extract reversed the tissue changes toward a normal state. Conclusion: The hydroethanolic extract of Calpurnia aurea seed lowered the hepatotoxicity and haematotoxicity in a dose-dependent manner. The antioxidant properties of the Calpurnia aurea seed extract may account for its protective effects against the drugs' toxicity.
Keywords: Calpurnia aurea, hepatotoxicity, haematotoxicity, antioxidant, histopathology, HAART
Procedia PDF Downloads 99
551 Gassing Tendency of Natural Ester Based Transformer Oils: Low Alkane Generation in Stray Gassing Behaviour
Authors: Thummalapalli CSM Gupta, Banti Sidhiwala
Abstract:
Mineral oils of naphthenic and paraffinic types have traditionally been used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer and cooling. The performance of these oils has been proven in the field over many decades, and transformer condition and performance have been successfully monitored through oil properties and dissolved gas analysis (DGA), with different types of gases representing various types of faults due to components or operating conditions. While a large industry database on dissolved gas analysis for mineral-oil-based transformer oils has been generated, along with various models for fault prediction and analysis, oil specifications and standards have also been modified to include stray gassing limits, which cover low-temperature faults and serve as an effective preventative maintenance tool for understanding the breakdown of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Their benefits include biodegradability, a higher fire point, improved transformer load capability, and longer solid insulation life than mineral oils. However, the stray gases evolved, such as hydrogen and the hydrocarbons methane (CH4) and ethane (C2H6), show very high values, much higher than the limits in mineral oil standards. Though standards for these types of esters are yet to evolve, the high hydrocarbon gas values of products currently available in the market are a concern, as they might be interpreted as a fault in transformer operation.
The current paper focuses on developing a natural-ester-based transformer oil whose stray gassing levels, measured by standard test methods, are much lower than those of currently available products; experimental results under various test conditions are presented and the underlying mechanism is explained.
Keywords: biodegradability, fire point, dissolved gas analysis, stray gassing
Procedia PDF Downloads 94
550 Well-Defined Polypeptides: Synthesis and Selective Attachment of Poly(ethylene glycol) Functionalities
Authors: Cristina Lavilla, Andreas Heise
Abstract:
The synthesis of sequence-controlled polymers has received increasing attention in recent years. Well-defined polyacrylates, polyacrylamides, and styrene-maleimide copolymers have been synthesized by sequential or kinetic addition of comonomers. However, this approach has not yet been applied to the synthesis of polypeptides, which are in fact polymers developed by nature in a sequence-controlled way. Polypeptides are natural materials that possess the ability to self-assemble into complex and highly ordered structures. Their folding and properties arise from precisely controlled sequences and compositions of their constituent amino acid monomers. So far, solid-phase peptide synthesis is the only technique that allows the preparation of short peptide sequences with excellent sequence control, but it requires extensive protection/deprotection steps and is difficult to scale up. A new strategy towards sequence control in the synthesis of polypeptides is introduced, based on the sequential addition of α-amino acid N-carboxyanhydrides (NCAs). The living ring-opening process is conducted to full conversion, and no purification or deprotection is needed before the addition of a new amino acid. The length of every block is predefined by the NCA:initiator ratio in each step. This method yields polypeptides with a specific sequence and controlled molecular weights. A series of polypeptides with varying block sequences have been synthesized with the aim of identifying structure-property relationships. All of them are able to adopt secondary structures similar to natural polypeptides and display properties in the solid state and in solution that are characteristic of their primary structure. By design, the prepared polypeptides allow selective modification of individual block sequences, which has been exploited to introduce functionalities at defined positions along the polypeptide chain.
Poly(ethylene glycol) (PEG) was the functionality chosen, as it is known to favor hydrophilicity and to yield thermoresponsive materials. After PEGylation, the hydrophilicity of the polypeptides is enhanced, and their thermal response in H2O has been studied. Noteworthy differences in the behavior of polypeptides with different sequences have been found. Circular dichroism measurements confirmed that the α-helical conformation is stable over the examined temperature range (5-90 °C). It is concluded that the PEG units are mainly responsible for the changes in H-bonding interactions with H2O upon variation of temperature, and that the position of these functional units along the backbone is a factor of utmost importance in the resulting properties of the α-helical polypeptides.
Keywords: α-amino acid N-carboxyanhydrides, multiblock copolymers, poly(ethylene glycol), polypeptides, ring-opening polymerization, sequence control
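The abstract's claim that "the length of every block is predefined by the NCA:initiator ratio in every step" can be illustrated with a minimal sketch; the feed quantities below are hypothetical, not taken from the paper:

```python
# Illustrative sketch: in a living ring-opening polymerization carried to full
# conversion, the target degree of polymerization (DP) of each block is set by
# the NCA:initiator molar ratio of that feed step.
def block_lengths(nca_feeds_mmol, initiator_mmol):
    """Return the target DP of each sequentially added block."""
    return [feed / initiator_mmol for feed in nca_feeds_mmol]

# Hypothetical feed schedule: three sequential NCA additions to 0.5 mmol initiator,
# giving a triblock with target DPs of 50, 20 and 50.
dps = block_lengths([25.0, 10.0, 25.0], 0.5)
```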
Procedia PDF Downloads 199
549 Moving Target Defense against Various Attack Models in Time Sensitive Networks
Authors: Johannes Günther
Abstract:
Time Sensitive Networking (TSN), standardized in IEEE 802.1, has received increasing attention in the context of mission critical systems. Such mission critical systems, e.g., in the automotive, aviation, industrial, and smart factory domains, are responsible for coordinating complex functionalities in real time. In many of these contexts, reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid in the development of new technologies that rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline, or even execute a denial of service (DoS) attack if the delays lead to packets being dropped. However, thus far, security concerns have not played a major role in the design of such standards. Thus, while TSN does provide valuable additional characteristics to existing Ethernet protocols, it introduces new attack vectors on networks and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy. The core idea relies on reducing the attackers' knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage.
DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker may learn about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics of links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against attacker models of varying capabilities and budgets, as well as collaborative attacks by multiple attackers within a network, all within the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNeT++ testbench, with networks of different sizes and topologies, ranging from a couple of dozen hosts and switches to significantly larger set-ups.
Keywords: network security, time sensitive networking, moving target defense, cyber security
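The core MTD idea described above, invalidating reconnaissance by changing network configuration over time, can be sketched as a toy example. This is not the authors' OMNeT++ implementation; the function name, the streams, and the rotate-among-redundant-paths policy are all illustrative assumptions:

```python
import random

# Toy sketch of a moving-target-defense step: periodically re-randomize which
# of several redundant paths each critical TSN stream uses, so topology and
# traffic knowledge an attacker gathered during reconnaissance goes stale.
def rotate_paths(streams, paths, seed):
    rng = random.Random(seed)  # seeded here only to make the sketch reproducible
    return {s: rng.choice(paths) for s in streams}

# Hypothetical streams and redundant paths; in a real deployment the rotation
# would be triggered on a schedule or by an intrusion indicator.
assignment = rotate_paths(["cam_feed", "brake_ctrl"], ["pathA", "pathB", "pathC"], seed=1)
```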
Procedia PDF Downloads 72
548 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
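The Hamming distance named in the keywords underlies attribute clustering for vertical partitioning: attributes accessed by the same queries have nearby binary usage vectors and are good candidates for the same fragment. The sketch below illustrates only this basic idea with a hypothetical workload; it is not the paper's VPA-RTSBD or its Matching algorithm:

```python
# Hamming distance between two equal-length binary attribute-usage vectors:
# the number of queries that use one attribute but not the other.
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# usage[attr][q] = 1 if query q accesses attribute attr (hypothetical workload).
usage = {
    "lat":   [1, 1, 0, 1],
    "lon":   [1, 1, 0, 1],
    "speed": [0, 0, 1, 0],
}
# lat and lon co-occur in exactly the same queries (distance 0), so a vertical
# partitioner would place them together; both are far from speed (distance 4).
d_latlon = hamming(usage["lat"], usage["lon"])
d_latspeed = hamming(usage["lat"], usage["speed"])
```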
Procedia PDF Downloads 156
547 A Qualitative Study Exploring Factors Influencing the Uptake of and Engagement with Health and Wellbeing Smartphone Apps
Authors: D. Szinay, O. Perski, A. Jones, T. Chadborn, J. Brown, F. Naughton
Abstract:
Background: The uptake of health and wellbeing smartphone apps is largely influenced by popularity indicators (e.g., rankings) rather than evidence-based content, and rapid disengagement is common. This study aims to explore how and why potential users 1) select and 2) engage with such apps, and 3) how increased engagement could be promoted. Methods: Semi-structured interviews and a think-aloud approach were used to allow participants to verbalise their thoughts whilst searching for a health or wellbeing app online, followed by a guided search in the UK National Health Service (NHS) 'Apps Library' and Public Health England’s (PHE) 'One You' website. Recruitment took place between June and August 2019. Adults interested in using an app for behaviour change were recruited through social media. Data were analysed using the framework approach. The analysis was both inductive and deductive, with the coding framework informed by the Theoretical Domains Framework. The results were further mapped onto the COM-B (Capability, Opportunity, Motivation - Behaviour) model. The study protocol is registered on the Open Science Framework (https://osf.io/jrkd3/). Results: The following targets were identified as playing a key role in increasing the uptake of and engagement with health and wellbeing apps: 1) psychological capability (e.g., reduced cognitive load); 2) physical opportunity (e.g., low financial cost); 3) social opportunity (e.g., embedded social media); 4) automatic motivation (e.g., positive feedback). Participants believed that the promotion of evidence-based apps on NHS-related websites could be enhanced through active promotion on social media, adverts on the internet, and in general practitioner practices. Future Implications: These results can inform the development of interventions aiming to promote the uptake of and engagement with evidence-based health and wellbeing apps, a priority within the UK NHS Long Term Plan ('digital first').
The targets identified across the COM-B domains could help organisations that provide platforms for such apps to increase impact through better selection of apps.
Keywords: behaviour change, COM-B model, digital health, mhealth
Procedia PDF Downloads 165
546 Understanding the Effects of Lamina Stacking Sequence on Structural Response of Composite Laminates
Authors: Awlad Hossain
Abstract:
Structural weight reduction with improved functionality is one of the targeted desires of engineers, which drives materials and structures to be lighter. One way to achieve this objective is by replacing metallic structures with composites. The main advantages of composite materials are that they are lightweight and offer high specific strength and stiffness. Composite materials can be classified in various ways based on fiber type and fiber orientation. Fiber reinforced composite laminates are prepared by stacking single sheets of continuous fibers impregnated with resin in different orientations to obtain the desired strength and stiffness. This research aims to understand the effects of the Lamina Stacking Sequence (LSS) on the structural response of a symmetric composite laminate, defined by [0/60/-60]s. The LSS represents how the layers are stacked together in a composite laminate. The [0/60/-60]s laminate is a composite plate consisting of six layers of fibers, stacked at 0, 60, -60, -60, 60 and 0 degree orientations. This laminate is called symmetric (denoted by the subscript s) as it consists of the same material with identical fiber orientations above and below the mid-plane. Therefore, [0/60/-60]s, [0/-60/60]s, [60/-60/0]s, [-60/60/0]s, [60/0/-60]s, and [-60/0/60]s represent the same set of plies but with different LSS. In this research, the effects of LSS on the laminate in-plane and bending moduli were investigated first. The laminate moduli dictate the in-plane and bending deformations upon loading. This research also describes the setup and techniques for measuring the in-plane and bending moduli, as well as how the stress distribution was assessed. Then, the laminate was subjected to an in-plane force load and a bending moment. The strain and stress distribution at each ply for different LSS was investigated using the concepts of macro-mechanics.
Finally, several numerical simulations were conducted using the Finite Element Analysis (FEA) software ANSYS to investigate the effects of LSS on deformations and stress distribution. The FEA results were also compared to the macro-mechanics solutions obtained with MATLAB. The outcome of this research helps composite users determine the optimum LSS required to minimize the overall deformation and stresses. It would be beneficial to predict the structural response of composite laminates analytically and/or numerically before in-house fabrication.
Keywords: composite, lamina, laminate, lamina stacking sequence, laminate moduli, laminate strength
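The symmetric shorthand used in the abstract can be made concrete with a minimal sketch: a symmetric laminate [0/60/-60]s mirrors its half-stack about the mid-plane, yielding the six-ply sequence 0/60/-60/-60/60/0 described above. The helper name is an illustrative assumption:

```python
# Minimal sketch: expand the shorthand symmetric stacking-sequence notation
# [a/b/c]s into the full ply list by mirroring the half-stack about the
# laminate mid-plane.
def expand_symmetric(half_stack):
    """[0, 60, -60] -> [0, 60, -60, -60, 60, 0]"""
    return half_stack + half_stack[::-1]

plies = expand_symmetric([0, 60, -60])  # the [0/60/-60]s laminate of the abstract
```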
Procedia PDF Downloads 6
545 ESL Material Evaluation: The Missing Link in Nigerian Classrooms
Authors: Abdulkabir Abdullahi
Abstract:
The paper is a pre-use evaluation of grammar activities in three primary English course books (two international primary English course books and a popular Nigerian primary English course book). The titles are Cambridge Global English, Collins International Primary English, and Nigeria Primary English – Primary English. Grammar points and grammar activities in the three course books were identified, grouped, and evaluated. The grammar activity most common in the course books, the simple past tense, was chosen for evaluation, and the units which present simple past tense activities were selected to evaluate the extent to which the treatment of the simple past tense in each of the course books helps young Nigerian learners of English as a second language, aged 8 – 11, level A1 to A2, who lack basic grammatical knowledge, to learn grammar and communicate effectively. A bespoke checklist, devised by modifying existing checklists for the purpose of the evaluation, was used to assess the extent to which the grammar activities promote the communicative effectiveness of Nigerian learners of English as a second language. The results of the evaluation and the analysis of the data reveal that the treatment of grammar, especially of the simple past tense, is evidently insufficient. While Cambridge Global English’s and Collins International Primary English’s treatment of grammar, the simple past tense, is underpinned by state-of-the-art learning theories, language learning theories, second language learning principles, second language curriculum and syllabus design principles, and grammar learning and teaching theories, the grammar load is insignificantly low, and the grammar tasks do not sufficiently promote creative grammar practice. Nigeria Primary English – Primary English, on the other hand, treats grammar, the simple past tense, in the old-fashioned direct way.
The book does not favour the communicative language teaching approach, offers learners no opportunity to notice and discover grammar rules for themselves, and lacks the potency to promote creative grammar practice. The research and its findings, therefore, underscore the need to improve grammar content and increase the grammar activity types that engage learners effectively and promote sufficient creative grammar practice in EFL and ESL material design and development.
Keywords: evaluation, activity, second language, activity-types, creative grammar practice
Procedia PDF Downloads 79
544 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence
Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay
Abstract:
Working with a dielectric photonic crystal (PC) structure that does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments which sandwich it. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bidirectional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows, preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled. This transmission is due to evanescent waves, which reach the deeply embedded defect layer and couple to higher order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both upper and lower surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface, with the same intensity for both illumination cases. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures.
Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si, and of alumina) reveal unidirectional splitting over a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials utilized are low-loss and weakly dispersive in a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to these ranges.
Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality
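The role of the first positive and first negative diffraction orders in the dual-beam splitting can be illustrated with the standard grating equation for a period-a structure at normal incidence: order m propagates when |mλ/(na)| ≤ 1, at angle θ_m = asin(mλ/(na)). The sketch below is textbook background with illustrative values, not the paper's PC design:

```python
import math

# Propagating diffraction orders of a periodic structure (period `period`)
# illuminated at normal incidence, radiating into a medium of index n.
# Order m propagates when |m * wavelength / (n * period)| <= 1, at the angle
# theta_m = asin(m * wavelength / (n * period)) given by the grating equation.
def propagating_orders(wavelength, period, n=1.0):
    orders = {}
    m = 0
    while abs(m * wavelength / (n * period)) <= 1.0:
        orders[m] = math.degrees(math.asin(m * wavelength / (n * period)))
        if m != 0:
            orders[-m] = math.degrees(math.asin(-m * wavelength / (n * period)))
        m += 1
    return orders

# Illustrative case: wavelength = 0.8 * period in air, so only m = 0 and
# m = +/-1 propagate; the +/-1 orders leave at symmetric angles, which is the
# regime where two-beam splitting via first orders is possible.
orders = propagating_orders(0.8, 1.0)
```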
Procedia PDF Downloads 182