Search results for: auto tuning
114 Empirical Investigation of Barriers to Industrial Energy Conservation Measures in the Manufacturing Small and Medium Enterprises (SME's) of Pakistan
Authors: Muhammad Tahir Hassan, Stas Burek, Muhammad Asif, Mohamed Emad
Abstract:
The industrial sector in Pakistan accounts for 25% of total energy consumption in the country. The performance of this sector has been severely affected by the country's ongoing energy crisis. The energy conservation potential of Pakistan's industrial sectors through energy management can recover wasted energy, which would ultimately lead to economic and environmental benefits. However, the lack of financial incentives for energy efficiency and the absence of energy benchmarking within the same industrial sectors are among the main challenges in the implementation of energy management. In Pakistan, this area has not been adequately explored, and there is a lack of focus on the need for industrial energy efficiency and proper management. The main objectives of this research are to evaluate the current energy management performance of the Pakistani industrial sector and to empirically investigate the existence of various barriers to industrial energy efficiency. Data were collected from respondents at 192 small and medium-sized enterprises (SMEs) of Pakistan, i.e., foundries, textile, plastic industries, light engineering, auto and spare parts, and ceramic manufacturers, and analysed using the Statistical Package for the Social Sciences (SPSS) software. The current energy management performance of manufacturing SMEs in Pakistan has been evaluated by employing two significant indicators, the 'Energy Management Matrix' and 'pay-off criteria', with a modified approach. Using the energy management matrix, energy management profiles of the overall industry and of the individual sectors have been drawn to assess energy management performance and to identify weak and strong areas. Results reveal that energy management practices across the surveyed industries are at a very low level. The energy management profiles drawn for each sector suggest that the performance of the textile sector is the best among the surveyed manufacturing SMEs.
The empirical barriers to industrial energy efficiency have also been ranked according to the overall responses. The results further reveal that a significant relationship exists among industrial size, sector type, and the nature of barriers to industrial energy efficiency for manufacturing SMEs in Pakistan. The findings of this study may help industries and policy makers in Pakistan to formulate a sustainable energy policy to support industrial energy efficiency, keeping in view the energy efficiency scenario actually existing in the industrial sector.
Keywords: barriers, energy conservation, energy management profile, environment, manufacturing SME's of Pakistan
Procedia PDF Downloads 290
113 A Simulated Evaluation of Model Predictive Control
Authors: Ahmed AlNouss, Salim Ahmed
Abstract:
Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the domain of control that covers different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC), and performance assessment. APC is often used for solving multivariable control problems, and MPC is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant specific, and the application needs a large investment. This requires an analysis of the expected benefits before the control is implemented. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in plant performance due to advanced control. In this research, such an exercise is undertaken to assess the need for APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulation set-ups and establish the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop an appropriate performance index (PI) to compare the performance of different controllers, and a novel way to present the tuning map of a controller. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to a multi-tank process simulated in Loop-Pro. The controller performance was then evaluated while changing the controller parameters.
This performance evaluation was based on indices related to the difference between the set point and the process variable, in order to compare the two controllers. The same approach was applied to continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB, for which custom programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using the SigmaPlot package. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for, and importance of, advanced process control. It has also been shown that, using appropriate indices, a predictive controller can improve the performance of a control loop significantly.
Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivatives (PID), performance indices (PI)
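The set-point-tracking indices of the kind described in this abstract (differences between set point and process variable) can be sketched in a few lines. The indices below (IAE, ISE, ITAE) are standard choices, and the first-order step responses are a generic illustration, not the paper's actual Loop-Pro or MATLAB models:

```python
import numpy as np

def performance_indices(t, pv, sp):
    """Compute common control-loop performance indices from the error
    between set point (sp) and process variable (pv)."""
    e = sp - pv
    dt = t[1] - t[0]                          # uniform sampling assumed
    return {
        "IAE": np.sum(np.abs(e)) * dt,        # integral of absolute error
        "ISE": np.sum(e ** 2) * dt,           # integral of squared error
        "ITAE": np.sum(t * np.abs(e)) * dt,   # time-weighted absolute error
    }

# Example: first-order closed-loop step responses, pv = 1 - exp(-t/tau).
t = np.linspace(0.0, 10.0, 1001)
sp = np.ones_like(t)
for tau in (2.0, 0.5):  # a sluggish loop vs. a better-tuned, faster loop
    pv = 1.0 - np.exp(-t / tau)
    print(tau, performance_indices(t, pv, sp))
```

A faster-settling loop yields smaller error integrals, which is exactly how such indices rank one controller tuning against another.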
Procedia PDF Downloads 407
112 Active Power Filters and their Smart Grid Integration - Applications for Smart Cities
Authors: Pedro Esteban
Abstract:
Most installations nowadays are exposed to many power quality problems, and they also face numerous challenges in complying with grid codes and energy efficiency requirements. The reason is that they were not designed to support the nonlinear, unbalanced, and variable loads and generators that make up a large percentage of modern electric power systems. These problems and challenges become especially critical when designing green buildings and smart cities. They are caused by equipment typically found in these installations, such as variable speed drives (VSD), transformers, lighting, battery chargers, double-conversion uninterruptible power supply (UPS) systems, highly dynamic loads, single-phase loads, fossil fuel generators, and renewable generation sources, to name a few. Moreover, events like capacitor switching (from existing capacitor banks or passive harmonic filters), auto-reclose operations of transmission and distribution lines, or the starting of large motors also contribute to these problems. Active power filters (APF) are one of the fastest-growing power electronics technologies for solving power quality problems and meeting grid code and energy efficiency requirements across a wide range of segments and applications. They are a high-performance, flexible, compact, modular, and cost-effective type of power electronics solution that provides an instantaneous and effective response in low or high voltage electric power systems. They enable longer equipment lifetime, higher process reliability, improved power system capacity and stability, and reduced energy losses, complying with the most demanding power quality and energy efficiency standards and grid codes. Several types of active power filters can be found nowadays, including active harmonic filters (AHF), static var generators (SVG), active load balancers (ALB), hybrid var compensators (HVC), and low harmonic drives (LHD).
All these devices can be used in Smart City applications, bringing several technical and economic benefits.
Keywords: power quality improvement, energy efficiency, grid code compliance, green buildings, smart cities
Procedia PDF Downloads 112
111 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore their unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) to predict the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained in a lumped parameter model (LPM) of the SRM, which leads MOEAs to local optima. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time of a single LPM simulation is usually less than that of a CFD simulation, the number of function evaluations (NFEs) in MOEAs is usually large, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions, so CFD simulations or experiments are required to compare and verify the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough. An artificial neural network (ANN) based parameter prediction method is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model trained on 3D numerical simulations, and the original LPM is replaced by this surrogate model in the design process. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction of miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model provides faster and more accurate parameter prediction for an initial design scheme.
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model (LPM). This means that designers can save a great deal of time during code debugging and parameter tuning in a complex design process. By combining an ANN-based surrogate model with MOEAs, designers can reduce repeated calculation costs and obtain accurate optimal solutions.
Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
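The surrogate idea, replacing an expensive simulation with a cheap fitted model inside the optimization loop, can be sketched as follows. Everything here is an assumption for illustration: a 1-D toy objective stands in for the 3-D simulation, and a cubic polynomial fit stands in for the trained ANN:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_sim(x):
    """Hypothetical stand-in for a costly 3D simulation (toy 1-D objective)."""
    return (x - 0.3) ** 2 + 0.02 * np.sin(8 * x)

# 1) Sample the expensive model sparsely to build training data.
x_train = np.linspace(0.0, 1.0, 12)
y_train = expensive_sim(x_train)

# 2) Fit a cheap surrogate (a cubic polynomial here; the paper trains an ANN).
coeffs = np.polyfit(x_train, y_train, deg=3)

# 3) Search against the surrogate instead of the expensive model.
candidates = rng.uniform(0.0, 1.0, 10_000)
best = candidates[np.argmin(np.polyval(coeffs, candidates))]

# 4) Verify the winning candidate with a single expensive evaluation.
print(best, expensive_sim(best))
```

The optimizer makes thousands of cheap surrogate calls but only a handful of expensive ones, which is the source of the computational savings claimed above.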
Procedia PDF Downloads 90
110 Code Mixing and Code-Switching Patterns in Kannada-English Bilingual Children and Adults Who Stutter
Authors: Vasupradaa Manivannan, Santosh Maruthy
Abstract:
Background/Aims: Preliminary evidence suggests that code-switching and code-mixing may act as voluntary coping behaviors to avoid stuttering characteristics in children and adults; however, less is known about the types and patterns of code-mixing (CM) and code-switching (CS), and it is not known how these differ between children and adults who stutter. This study aimed to identify and compare the CM and CS patterns of Kannada-English bilingual children and adults who stutter. Method: A standard group comparison was made between five children who stutter (CWS) in the age range of 9-13 years and five adults who stutter (AWS) in the age range of 20-25 years. Participants proficient in Kannada (first language, L1) and English (second language, L2) were considered for the study. Both groups were given two tasks: a) a general conversation (GC) with 10 random questions, and b) a narration task (NAR) (a story or a general topic, for example, a memorable life event), each in three different conditions: monolingual Kannada (MK), monolingual English (ME), and bilingual (BIL). The children and adults were assessed online (via a Zoom session) with a high-quality internet connection. Audio and video samples of the full assessment session were auto-recorded and manually transcribed. The recorded samples were analyzed for the percentage of dysfluencies using SSI-4, and the CM and CS exhibited by each participant were analyzed using the Matrix Language Frame (MLF) model parameters. The obtained data were analyzed using the Statistical Package for the Social Sciences (SPSS) software (Version 20.0). Results: Mean, median, and standard deviation values were obtained for the percentage of dysfluencies (%SS) and the frequency of CM and CS in Kannada-English bilingual children and adults who stutter, for the various parameters of the MLF model.
The inferential results indicated that %SS varied significantly between populations (AWS vs CWS), languages (L1 vs L2), and tasks (GC vs NAR), but not across free (BIL) and bound (MK, ME) conditions. It was also found that the frequency of CM and CS patterns varies between CWS and AWS. The AWS had a lower %SS but greater use of CS patterns than CWS, which may reflect their more developed coping skills. Language mixing patterns were observed more in L1 than in L2, and this was significant for most of the MLF parameters. However, there was a significantly higher (p<0.05) %SS in L2 than in L1. The CM and CS patterns occurred more in conditions 1 and 3 than in condition 2, which may be due to higher proficiency in L2 than in L1. Conclusion: The findings highlight the importance of assessing CM and CS behaviors, their patterns, and the frequency of CM and CS between CWS and AWS on MLF parameters in two different tasks across three conditions. The results help us understand CM and CS strategies in bilingual persons who stutter.
Keywords: bilinguals, code mixing, code switching, stuttering
Procedia PDF Downloads 78
109 Engineering a Tumor Extracellular Matrix Towards an in vivo Mimicking 3D Tumor Microenvironment
Authors: Anna Cameron, Chunxia Zhao, Haofei Wang, Yun Liu, Guang Ze Yang
Abstract:
Since the first publication in 1775, cancer research has built a comprehensive understanding of how cellular components of the tumor niche promote disease development. However, only within the last decade has research begun to establish the impact of non-cellular components of the niche, particularly the extracellular matrix (ECM). The ECM, a three-dimensional scaffold that sustains the tumor microenvironment, plays a crucial role in disease progression. Cancer cells actively deregulate and remodel the ECM to establish a tumor-promoting environment. Recent work has highlighted the need to further our understanding of the complexity of this cancer-ECM relationship. In vitro models use hydrogels to mimic the ECM, as hydrogel matrices offer the biological compatibility and stability needed for long-term cell culture. However, natural hydrogels are being used in these models as-is, without tuning their biophysical characteristics to achieve pathophysiological relevance, thus limiting their broad use within cancer research. The biophysical attributes of these gels dictate cancer cell proliferation, invasion, metastasis, and therapeutic response. For the three most widely used natural hydrogels, Matrigel, collagen, and agarose gel, the permeability, stiffness, and pore size of each gel were measured and compared to the in vivo environment. The pore size of all three gels fell between 0.5-6 µm, which overlaps with the 0.1-5 µm in vivo pore size found in the literature. However, the stiffness of hydrogels able to support cell culture ranged between 0.05 and 0.3 kPa, which falls outside the range of 0.3-20,000 kPa reported in the literature for an in vivo ECM. Permeability was ~100x greater than in vivo measurements, due in large part to the lack of cellular components which impede permeation.
These measurements nevertheless prove important when assessing therapeutic particle delivery, as ECM permeability decreased with increasing particle size, with 100 nm particles exhibiting a fifth of the permeability of 10 nm particles. This work explores ways of adjusting the biophysical characteristics of hydrogels by changing protein concentration, and the trade-offs that occur due to the interdependence of these factors. The global aim of this work is to produce a more pathophysiologically relevant model for each tumor type.
Keywords: cancer, extracellular matrix, hydrogel, microfluidic
Procedia PDF Downloads 91
108 Exploration of Probiotics and Anti-Microbial Agents in Fermented Milk from Pakistani Camel spp. Breeds
Authors: Deeba N. Baig, Ateeqa Ijaz, Saloome Rafiq
Abstract:
The camel is a religiously and culturally significant animal in Asian and African regions. In Pakistan, Dromedary and Bactrian are the common camel breeds. Beyond its use for transportation, it is a pivotal source of milk and meat, whose quality is predominantly dependent on geographical location and the variety of vegetation available for the diet. Camel milk (CM) is highly nutritious because of its reduced cholesterol and sugar contents along with enhanced mineral and vitamin levels. The absence of beta-lactoglobulin (as in human milk) makes CM a safer alternative for infants and children with Cow Milk Allergy (CMA). In addition, it has a unique probiotic profile in both raw and fermented form. A number of lactic acid bacteria (LAB), including Lactococcus, Lactobacillus, Enterococcus, Streptococcus, Weissella, Pediococcus, and many other bacteria, have been detected. Of these LAB, Lactobacillus, Bifidobacterium, and Enterococcus are widely used commercially for fermentation. CM has high therapeutic value, as it is known to be effective against various ailments such as fever, arthritis, asthma, gastritis, hepatitis, jaundice, constipation, and dropsy, and is used in postpartum care of women and as an anti-venom. It also has anti-diabetic, anti-microbial, and antitumor potential, along with robust efficacy in the treatment of auto-immune disorders. Recently, the role of CM has been explored in the brain-gut axis for the therapeutics of neurodevelopmental disorders. In this connection, much remained unexplored regarding the probiotic and therapeutic potential of the CM available in Pakistan. Thus, the current study was designed to explore the predominant probiotic flora and antimicrobial potential of CM from different local breeds of Pakistan. The probiotics were identified through biochemical, physiological, and ribotyping methods. In addition, bacteriocins (antimicrobial agents) were screened through a PCR-based approach.
Results of this study revealed that CM from different camel breeds shared a number of probiotic candidates, with limited variability. Nucleotide sequence analysis of selected anti-listerial bacteriocins likewise revealed minimal variability. In conclusion, CM has sufficient probiotic availability and significant anti-microbial potential.
Keywords: bacteriocins, camel milk, probiotics potential, therapeutics
Procedia PDF Downloads 133
107 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI
Authors: Brennan Lodge
Abstract:
Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk, and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure that pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights drawn from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy.
It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies
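The retrieve-then-generate pattern described above can be reduced to a minimal sketch: embed a corpus, retrieve the most similar passage for a query, and prepend it to the generator's prompt. The corpus, query, and bag-of-words vectors below are toy assumptions standing in for a real GRC knowledge base and dense neural embeddings:

```python
import numpy as np
from collections import Counter

# Toy corpus of compliance snippets (hypothetical stand-in for a GRC knowledge base).
corpus = [
    "Access to production systems requires multi-factor authentication.",
    "Incident reports must be filed within 72 hours of detection.",
    "Encryption keys are rotated every 90 days.",
]

def embed(text, vocab):
    """Bag-of-words vector; real RAG systems use dense neural embeddings."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

vocab = sorted({w for doc in corpus for w in doc.lower().split()})
index = np.stack([embed(doc, vocab) for doc in corpus])  # the "dense vector index"

def retrieve(query, k=1):
    """Return the k corpus passages most similar to the query (cosine similarity)."""
    q = embed(query, vocab)
    sims = index @ q / (np.linalg.norm(index, axis=1) * (np.linalg.norm(q) + 1e-9))
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

query = "How soon must an incident report be filed?"
context = retrieve(query)[0]
prompt = f"Context: {context}\nQuestion: {query}"  # this prompt would feed the LLM
print(prompt)
```

Because the index is rebuilt from an updatable corpus rather than baked into model weights, new regulatory text becomes available to the generator without retraining, which is the "non-parametric" half of RAG's dual structure.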
Procedia PDF Downloads 95
106 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator
Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya
Abstract:
Bhabha Atomic Research Centre, Trombay, is developing a low energy high intensity proton accelerator (LEHIPA) as the pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as its major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the DTL drift tubes was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as small as possible. The width of a DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively high focusing strength of the former. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles (PMQs) rather than electromagnetic quadrupoles (EMQs). This provides another advantage: joule heating is avoided, which would otherwise have added thermal load in a continuous-cycle accelerator. The beam dynamics require the uniformity of the integral magnetic gradient to be better than ±0.5%, with a nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets, and discusses the results of five pre-series prototype fabrications, the qualification of the prototype permanent magnet quadrupoles, and a full-scale DT developed with embedded PMQs.
The paper discusses the magnetic pole design for optimizing the integral Gdl uniformity and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is discussed.
Keywords: DTL, focusing, PMQ, proton, rare earth magnets
Procedia PDF Downloads 472
105 Investigating the Relationship between Bioethics and Sports
Authors: Franco Bruno Castaldo
Abstract:
Aim: The term bioethics was coined by Van Rensselaer Potter, who in 1970 conceived of a discipline capable of contributing to a better quality of human life and the cosmos. At first he intended bioethics as a wisdom capable of creating a bridge between bios and ethos, and between the bio-experimental sciences and the ethical-anthropological sciences. Similarly, modern sport presents itself as a polysemic, multidisciplinary, multi-valued phenomenon. From the beginning, sport has been included in the discussion of bioethical problems through doping. Today, the ethical problems of sport are not ascribable to doping alone, but also to the medicalization of society, enhancement techniques, violence, fraud, corruption, and even the acceptance of transhumanist anthropological theories. Our purpose is to shed light on these issues so that there is discernment and fine-tuning, also in educational programs, for the protection of all sport from a scientistic drift that would lead to an imbalance of values. Method: Reading, textual and documentary analysis, and evaluation of critical examples. Results: Harold VanderZwaag (1929-2011) once asked: how many athletic directors have read works of sport philosophy or the humanities? Along with E.A. Zeigler (North American Society for Sport Management), he is recognized as a pioneer of educational sport management. Hence the need to leave the confines of a single scientific field in order to deal with something other than itself. Conclusion: The quantitative sciences attract more funds than the qualitative ones. The philosopher M. Nussbaum has relaunched the idea that the training of students should be disinterested rather than utilitarian, offering arguments against anti-classical choices and analyzing and comparing different educational systems.
Schools and universities must assign a prominent place in their programs of study to humanistic, literary, and artistic subjects, cultivating a participation that can activate and improve the ability to see the world through the eyes of another person. To form citizens who play their role in society, science and technology alone are not enough; we need disciplines able to cultivate critical thinking, respect for diversity, solidarity, judgment, and freedom of expression. According to A. Camelli, the humanities faculties prepare for the life-long learning that will characterize tomorrow's jobs.
Keywords: bioethics, management, sport, transhumanist, medicalization
Procedia PDF Downloads 513
104 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that result, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tool support. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely fashion without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used.
For example, in a child abduction case, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
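The kind of cross-source correlation U-FAT aims to automate can be sketched as a schema-normalization step followed by a join. The records, field names, entity values, and the 5-minute matching window below are all hypothetical illustrations, not U-FAT's actual data model:

```python
from datetime import datetime

# Hypothetical extracts from two heterogeneous evidence sources.
phone_records = [
    {"ts": "2024-05-01T14:03:22Z", "entity": "+15550001111", "event": "call"},
    {"ts": "2024-05-01T14:41:09Z", "entity": "+15550002222", "event": "sms"},
]
cell_tower_logs = [
    {"time": 1714572202, "msisdn": "+15550001111", "tower": "T-0042"},
]

def to_epoch(iso):
    """Normalize ISO-8601 timestamps to epoch seconds (a shared schema)."""
    return int(datetime.fromisoformat(iso.replace("Z", "+00:00")).timestamp())

# Correlate: the same entity observed in both sources within a 5-minute window.
matches = []
for rec in phone_records:
    for log in cell_tower_logs:
        if rec["entity"] == log["msisdn"] and abs(to_epoch(rec["ts"]) - log["time"]) <= 300:
            matches.append((rec["event"], log["tower"]))

print(matches)
```

Mapping each source's native fields (`ts` vs `time`, `entity` vs `msisdn`) onto one common vocabulary is the "common language for electronic information" step; the join itself then becomes trivial to automate.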
Procedia PDF Downloads 196
103 Hydrogeochemical Investigation of Lead-Zinc Deposits in Oshiri and Ishiagu Areas, South Eastern Nigeria
Authors: Christian Ogubuchi Ede, Moses Oghenenyoreme Eyankware
Abstract:
This study assessed the concentration of heavy metals (HMs) in soil, rock, mine dump piles, and water from the Oshiri and Ishiagu areas of Ebonyi State. Investigations of the mobile fraction equally evaluated the geochemical condition of the different HMs, using a UV spectrophotometer for mineralized and unmineralized rocks, dumps, and soil, while AAS was used to determine the geochemical nature of the water system. Analysis revealed very high Cd pollution, mostly in the Ishiagu (Ihetutu and Amaonye) active mine zones, with subordinate enrichments of Pb, Cu, As, and Zn in Amagu and Umungbala. Oshiri recorded sparingly moderate to high contamination of Cd and Mn but outright high anthropogenic input. Observation showed that contamination was most severe near the mine workings and decreased with increasing distance from the mine vicinity. The potential heavy metal risk of the environments was evaluated using risk factors such as the enrichment factor, the index of geoaccumulation, the contamination factor, and the effect range median. Cadmium and Zn showed moderate to extreme contamination on the Geoaccumulation Index (Igeo), while Pb, Cd, and As indicated moderate to strong pollution on the Effect Range Median. When compared with the allowable limits and standards, the results showed metal concentrations in the following order: Cd>Zn>Pb>As>Cu>Ni (rocks), Cd>As>Pb>Zn>Cu>Ni (soil), and Cd>Zn>As>Pb>Cu (mine dump piles). High concentrations of Zn and As were recorded mainly in mine ponds and salt lines/drain channels along active mine zones; the threat heightens during the rainy period as runoff settles into river courses, leaving behind full-scale contamination for inhabitants depending on them for domestic use. Pb and Cu, with moderate pollution, were recorded in surface/stream water sources, as their mobility was relatively low.
Results from the Ishiagu Crush Rock sites and the Fedeco metallurgical and auto workshop, where groundwater contamination was seen infiltrating some of the well points, gave values four times higher than the allowable limits. According to WHO (2015), some of these metal concentrations, if left unmitigated, pose adverse effects to the soil and the human community.
Keywords: water, geo-accumulation, heavy metals, mine, Nigeria
Procedia PDF Downloads 170
102 Opto-Thermal Frequency Modulation of Phase Change Micro-Electro-Mechanical Systems
Authors: Syed A. Bukhari, Ankur Goswmai, Dale Hume, Thomas Thundat
Abstract:
Here we demonstrate mechanical detection of the photo-induced insulator-to-metal transition (MIT) in ultra-thin vanadium dioxide (VO₂) micro-strings using < 100 µW of optical power. A highly focused laser beam heats the string locally, resulting in through-plane and axial heat diffusion, and the localized temperature rise can exceed 60 ºC. The heated region of VO₂ can transform from the insulating (monoclinic) to the conducting (rutile) phase, leading to lattice compression and a stiffness increase in the resonator. The mechanical frequency of the resonator can be tuned by changing the optical power and wavelength. The first-mode resonance frequency was tuned in three different ways: a decrease in frequency below a critical optical power, a large increase between 50-120 µW, and a large decrease for optical powers greater than 120 µW. The dynamic mechanical response was studied as a function of incident optical power and gas pressure. The resonance frequency and vibration amplitude decreased with increasing laser power from 25-38 µW and increased by 1-2% when the laser power was further increased to 52 µW. The transition in the films was induced and detected both by a single pump-and-probe source and by employing external optical sources of different wavelengths. This trend in the dynamic parameters of the strings can be correlated with the reversible insulator-to-metal transition in VO₂ films, which changes the density of the material and hence the overall stiffness of the strings, leading to changes in string dynamics. The increase in frequency at a particular optical power manifests a transition to a more ordered metallic phase, which exerts tensile stress on the string. The decrease in frequency at higher optical powers can be correlated with the poor phonon thermal conductivity of VO₂ in the conducting phase.
The poor thermal conductivity of VO₂ can force in-plane penetration of heat, heating the underlying SiN that supports the VO₂, which can result in a decrease in resonance frequency. This noninvasive, non-contact laser-based excitation and detection of the insulator-to-metal transition using microstring resonators, at room temperature and with laser powers of a few µW, is important for low-power electronics and optical switching applications.
Keywords: thermal conductivity, vanadium dioxide, MEMS, frequency tuning
Procedia PDF Downloads 120
101 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)
Authors: Ahmed E. Hodaib, Mohamed A. Hashem
Abstract:
In engineering applications, a design has to be as close to perfect as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function, called the "objective function", that is required to be maximized or minimized by choosing input parameters, called "degrees of freedom", within an allowed domain called the "search space", and computing the values of the objective function for these input values. The problem becomes more complex when a design has more than one objective. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used: a curve plotting the two objective functions for the best trade-off cases. At this point, the designer must make a decision to choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solution then progresses towards the optimal point by using operators such as selection, combination, crossover, and/or mutation. These operators are applied to the old solutions ("parents") so that new sets of design variables ("children") appear. The process is repeated until the optimal solution to the problem is reached.
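The selection/crossover/mutation loop described above can be sketched in a few lines. This is a minimal single-objective toy (minimizing the sphere function), not a full MOEA with Pareto ranking; the population size, mutation scale, and generation count are illustrative choices.

```python
import random

def sphere(x):
    """Objective function: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def evolve(dim=3, pop_size=20, generations=200, sigma=0.3, seed=0):
    rng = random.Random(seed)
    # Initialize random "parents" within the search space [-5, 5]^dim.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half of the population as parents.
        pop.sort(key=sphere)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            # Crossover (combination): average two random parents.
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]
            # Mutation: Gaussian perturbation of each design variable.
            child = [v + rng.gauss(0, sigma) for v in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=sphere)

best = evolve()
print(sphere(best))  # close to 0 after 200 generations
```

Because the parents are carried over unchanged, the best solution never worsens between generations (elitism), which is one simple way the convergence guarantee mentioned above is realized in practice.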
Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automotive vehicles. The coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization
Procedia PDF Downloads 255
100 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs near-infrared (NIR) light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = arg min_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data-fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder-type networks.
This procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = arg min_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig. 1) into the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows us to obtain improved results, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
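The regularized formulation q̂* = arg min_q D(y, ỹ) + R(q) can be sketched on a toy problem. This assumes a linear forward model ỹ = Aq standing in for the PDE, a squared-error data-fidelity term, and a simple Tikhonov penalty R(q) = λ‖q‖²; the operator A, noise level, and λ are all illustrative, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned linear forward model y ~ A q (stand-in for the PDE).
n, m = 40, 40
A = rng.normal(size=(m, n)) @ np.diag(1.0 / (1 + np.arange(n)) ** 2)
q_true = np.zeros(n)
q_true[5:10] = 1.0
y = A @ q_true + 0.01 * rng.normal(size=m)

lam = 1e-3  # regularization weight, fixed a priori

def loss(q):
    # D(y, y~) + R(q): data-fidelity term plus Tikhonov penalty.
    r = A @ q - y
    return 0.5 * r @ r + lam * q @ q

def grad(q):
    return A.T @ (A @ q - y) + 2 * lam * q

# Gradient descent on the regularized objective, with a step size below
# the inverse Lipschitz constant so each iteration decreases the loss.
q = np.zeros(n)
step = 0.5 / np.linalg.norm(A, 2) ** 2
for _ in range(5000):
    q -= step * grad(q)

print(loss(q) < loss(np.zeros(n)))  # prints True: the objective decreased
```

In the hybrid approach described above, the hand-written penalty R(q) and the explicit forward operator A would be replaced, respectively, by a learned regularizer and a neural surrogate of the PDE solve.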
Procedia PDF Downloads 74
99 The Magnitude and Associated Factors of Immune Hemolytic Anemia among Human Immunodeficiency Virus-Infected Adults Attending University of Gondar Comprehensive Specialized Hospital, North West Ethiopia, 2021 GC: A Cross-Sectional Study Design
Authors: Samul Sahile Kebede
Abstract:
Background: Immune hemolytic anemia (IHA) commonly affects human immunodeficiency virus-infected individuals. Among anemic HIV patients in Africa, the burden of IHA due to autoantibodies ranged from 2.34 to 3.06, while IHA due to drugs was 43.4%. Autoimmune IHA is a potentially fatal complication of HIV and accounts for the greatest share of acquired hemolytic anemia. Objective: The main aim of this study was to determine the magnitude and associated factors of immune hemolytic anemia among human immunodeficiency virus-infected adults at the University of Gondar Comprehensive Specialized Hospital, North West Ethiopia, from March to April 2021. Methods: An institution-based cross-sectional study was conducted on 358 human immunodeficiency virus-infected adults selected by systematic random sampling at the University of Gondar Comprehensive Specialized Hospital from March to April 2021. Socio-demographic, dietary, and clinical data were collected with a structured, pretested questionnaire. Five ml of venous blood was drawn from each participant and analyzed on a UniCel DxH 800 hematology analyzer; blood film examination and an antihuman globulin test were performed for the diagnosis of immune hemolytic anemia. Data were entered into EpiData version 4.6 and analyzed with STATA version 14. Descriptive statistics were computed, and Firth penalized logistic regression was used to identify predictors. A P value less than 0.005 was interpreted as significant. Results: The overall prevalence of immune hemolytic anemia was 2.8% (10 of 358 participants). Of these, 5 were males, and 7 were in the 31 to 50 year age group. Among individuals with immune hemolytic anemia, 40% had mild and 60% had moderate anemia. The factors that showed an association were a family history of anemia (AOR 8.30 at 95% CI 1.56, 44.12), not eating meat (AOR 7.39 at 95% CI 1.25, 45.0), and a high viral load (AOR 6.94 at 95% CI 1.13, 42.6).
Conclusion and recommendation: Immune hemolytic anemia is a less frequent condition in human immunodeficiency virus-infected adults, and moderate anemia was common in this population. The prevalence increased with a high viral load, a family history of anemia, and not eating meat. In these patients, early detection and treatment of immune hemolytic anemia are necessary.
Keywords: anemia, hemolytic, immune, autoimmune, HIV/AIDS
Procedia PDF Downloads 106
98 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis
Authors: Syed Asif Hassan, Tabrej Khan
Abstract:
Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). In the present scenario, current therapies either do not halt the progression of the disease or have side effects which limit the use of current Disease-Modifying Therapies (DMTs) over longer periods. Therefore, given this treatment-failure schema, we focus on screening novel analogues of the available DMTs that specifically bind and inhibit the sphingosine-1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. The novel drug-like analog molecules will decrease the frequency of relapses (recurrence of the symptoms associated with MS) with higher efficacy and lower toxicity to the human system. In this study, an integrated approach involving a ligand-based virtual screening protocol (Ultrafast Shape Recognition with CREDO Atom Types, USRCAT) was employed to identify non-toxic drug-like analogs of the approved DMTs. The potential of the drug-like analog molecules to cross the Blood-Brain Barrier (BBB) was estimated. In addition, molecular docking and simulation with AutoDock Vina 1.1.2 and GOLD 3.01 were performed using the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs for the given LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program to evaluate the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein. A ligand with a higher hypothetical affinity has a more negative score. Further, non-specific ligands were screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT, Lipinski's values, toxicity and BBB analysis, the drug-like analogs of fingolimod and BG-12 showed that RTL and CHEMBL1771640, respectively, are non-toxic and permeable to the BBB.
The successful docking and DSX analysis showed that RTL and CHEMBL1771640 could bind to the binding pocket of the human S1PR1 receptor protein with greater affinity than their parent compound (fingolimod). In this study, we also found that all the drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein
Procedia PDF Downloads 256
97 Processes and Application of Casting Simulation and Its Software
Authors: Surinder Pal, Ajay Gupta, Johny Khajuria
Abstract:
Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. The flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; Magma, for example, is best suited for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis.
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient advanced tool, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.
Keywords: casting simulation software, simulation techniques, casting simulation, processes
Procedia PDF Downloads 475
96 Fillet Chemical Composition of Sharpsnout Seabream (Diplodus puntazzo) from Wild and Cage-Cultured Conditions
Authors: Oğuz Taşbozan, Celal Erbaş, Şefik Surhan Tabakoğlu, Mahmut Ali Gökçe
Abstract:
Polyunsaturated fatty acids (PUFAs), and particularly the levels and ratios of ω-3 and ω-6 fatty acids, are important for biological functions in humans and are recognized as essential components of the human diet. From many different points of view, consumers wonder about the nutritional composition of fish raised in culture conditions compared with fish caught from the wild. Therefore, the aim of this study was to investigate the chemical composition of cage-cultured and wild sharpsnout seabream, an economically important fish species preferred by consumers in Turkey. The fish were caught from the wild or obtained from commercial cage-culture companies. Eight fish were obtained for each group; the average weights of the samples were 245.8±13.5 g for cultured and 149.4±13.3 g for wild samples. All samples were stored in a freezer (-18 °C), and analyses were carried out in triplicate, using homogenized boneless fish fillets. Proximate compositions (protein, ash, moisture and lipid) were determined. The fatty acid composition was analyzed on a Clarus 500 GC with auto sampler (Perkin-Elmer, USA). Statistically significant differences in proximate composition were found between the cage-cultured and wild samples of sharpsnout seabream. The saturated fatty acid (SFA), monounsaturated fatty acid (MUFA) and PUFA amounts of cultured and wild sharpsnout seabream were also significantly different. The ω3/ω6 ratio was higher in the cultured group. In particular, the protein and lipid levels of the cultured samples were significantly higher than those of their wild counterparts. One reason for this is that cultured fish are exposed to continuous feeding, which has a direct effect on their body lipid content. The fatty acid composition of fish differs depending on a variety of factors, including species, diet, environmental factors, and whether they are farmed or wild.
The higher levels of MUFA in the cultured fish may be explained by the high content of monoenoic fatty acids in the feed of cultured fish, as in some other species. The ω3/ω6 ratio is a good index for comparing the relative nutritional value of fish oils. In our study, the cultured sharpsnout seabream appears to be more nutritious in terms of ω3/ω6. Acknowledgement: This work was supported by the Scientific Research Project Unit of the University of Cukurova, Turkey, under grant no. FBA-2016-5780.
Keywords: Diplodus puntazzo, cage cultured, PUFA, fatty acid
Procedia PDF Downloads 266
95 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations
Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer
Abstract:
In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of the oscillations can be carried out by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective solution to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system. It utilizes the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and the tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the model reference adaptive control approach. The control signal, which ensures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with the σ-term. The σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory.
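The adaptive law described above (a "proportional" term combined with a σ-extended "integral" term) can be illustrated on a scalar toy plant. This is a sketch under assumed first-order dynamics, with all gains, the reference model, and the square-wave reference chosen for illustration; it is not the stabilizer design from the paper.

```python
import math

# Scalar plant dx/dt = a*x + b*u (a, b unknown to the controller) made to
# follow a stable reference model dxm/dt = am*xm + bm*r.
a, b = 1.0, 1.0                 # unstable plant
am, bm = -2.0, 2.0              # reference model
gp, gi, sigma = 2.0, 20.0, 0.1  # proportional gain, integral gain, sigma-term
dt, T = 1e-3, 20.0

x = xm = 0.0
kx_i = kr_i = 0.0               # integral parts of the adaptive gains
errors = []
for step in range(int(T / dt)):
    t = step * dt
    r = 1.0 if math.sin(0.5 * t) >= 0 else -1.0  # square-wave reference
    e = x - xm                                   # tracking error
    # Adaptive gains: proportional term plus sigma-extended integral term.
    kx = -gp * e * x + kx_i
    kr = -gp * e * r + kr_i
    u = kx * x + kr * r
    # Integral adaptation with sigma-modification to prevent gain drift.
    kx_i += dt * (-gi * e * x - sigma * kx_i)
    kr_i += dt * (-gi * e * r - sigma * kr_i)
    # Euler integration of plant and reference model.
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    errors.append(abs(e))

print(sum(errors[-2000:]) < sum(errors[:2000]))  # tracking error shrinks over time
```

For this plant the matching gains would be kx = (am - a)/b = -3 and kr = bm/b = 2; the adaptation drives the gains toward that neighborhood without knowing a and b, while the σ-term keeps the integral gains bounded.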
The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator oscillations across the entire operating range was investigated. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of the presented work will help in the development of a model reference power system stabilizer able to replace the conventional stabilizers in power systems.
Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control
Procedia PDF Downloads 138
94 Flexible Coupling between Gearbox and Pump (High Speed Machine)
Authors: Naif Mohsen Alharbi
Abstract:
This paper presents a failure that occurred on a flexible coupling installed in an oil and gas operation, along with the maintenance ideas implemented on the coupling, which transmits high torque from a gearbox to a pump. The machine train includes a steam turbine that drives the pump, with a gearbox located in between for speed reduction. Objective: The main objectives of the investigation are to identify the root causes, resolve them, and address the bad-actor equipment through improved technology and design, ultimately fulfilling operational productivity and ensuring better technology, quality, and design through the solutions. This report provides the study intended for continuous operation optimization, utilizing the advanced opportunity and implementing improvement. Method: The method used in this project was a very focused root cause analysis procedure that incorporated engineering analysis and measurements. The analysis extensively covers measurement of the complete coupling dimensions, including the membrane thickness, hubs, bore diameter and total length; dismantling of the flexible coupling to diagnose how deeply it was affected; definition of the failure modes so that the causes could be identified and verified; vibration analysis and metallurgical testing; and, lastly, the application of several solutions using advanced tools (detailed below). Results and observations: Design capacity: The coupling capacity is inadequate to fulfil 100% of the operating conditions. Therefore, a design modification of the service factor to at least 2.07 is crucial to address this issue and prevent recurrence of a similar scenario, especially for the new upgrading project.
Discharge fluctuation: High torque on the flexible coupling was encountered during operation. Therefore, the discharge valve behaviour, tuning, set point, and general condition were re-evaluated and modified; subsequently, these can serve as a baseline for the upcoming coupling design project. Metallurgical testing: The material of the flexible coupling membranes (discs) was tested in the lab for a detailed metallurgical investigation, and a better material grade was selected for our operating conditions.
Keywords: high speed machine, reliability, flexible coupling, rotating equipment
Procedia PDF Downloads 68
93 Social Factors and Suicide Risk in Modern Russia
Authors: Maria Cherepanova, Svetlana Maximova
Abstract:
Background and aims: Suicide is among the ten most common causes of death in the working-age population worldwide. According to WHO forecasts, by 2025 suicide will be the third leading cause of death, after cardiovascular diseases and cancer. In 2019, the global suicide rate was 10.5 per 100,000 people. In Russia, the average figure was 11.6. However, in some depressed regions of Russia, such as Buryatia and Altai, it reaches 35.3. The aim of this study was to develop models based on the regional factors of social well-being deprivation that provoke suicidal risk in various age groups of the Russian population. We also investigated suicidal risk prevention in modern Russia, analyzed its efficacy, and developed recommendations for improving suicidal risk prevention. Methods: In this study, we analyzed data from sociological surveys in six regions of Russia. In total, we interviewed 4,200 people; the age of the respondents was from 16 to 70 years. The results were subjected to factorial and regression analyses. Results: The results of our study indicate that young people are especially socially vulnerable, which results in ineffective patterns of self-preservation behavior and increases the risk of suicide. This is due to the lack of formation of anti-suicidal barriers; the low importance of vital values; the difficulty or impossibility of meeting basic needs; low satisfaction with family and professional life; and a decrease in personal unconditional significance. The suicidal risk of the middle-aged population is due to a decrease in social well-being in the main aspects of life, which determines low satisfaction, decreased ontological security, and the prevalence of auto-aggressive deviations. The suicidal risk of the elderly population is due to increased factors of social exclusion, which narrow the social space and limit the richness of life.
Conclusions: The existing system for lowering suicide risk in modern Russia is predominantly oriented toward medical treatment, which provides intervention only to people who have already attempted suicide, significantly limiting its preventive effectiveness and the social control of this deviation. The national strategy for suicide risk reduction in modern Russian society should combine medical and social activities designed to minimize situations that may lead to suicide. The strategy for eliminating suicidal risk should include a systematic and significant improvement in the social well-being of the population and aim at overcoming the basic aspects of social disadvantage, such as poverty and unemployment, as well as implementing innovative mental health improvement and developing life-saving behavior that will help to counter suicide in Russia.
Keywords: social factors, suicide, prevention, Russia
Procedia PDF Downloads 167
92 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS Command Injection, cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
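The AST path-context idea can be sketched with Python's own ast module standing in for the Java/C++ parsers used in the study: a path context pairs two leaf tokens with the chain of node types connecting them through the tree. This is a simplified illustration of the representation, not the study's extraction pipeline; the toy function and path encoding are illustrative (function parameters, for instance, are not captured here).

```python
import ast
import itertools

def leaf_paths(source):
    """Extract (leaf, path, leaf) contexts from a snippet's AST,
    in the spirit of Code2Vec's path-context representation."""
    tree = ast.parse(source)
    leaves = []

    def walk(node, path):
        # Record the root-to-node chain of node type names.
        path = path + [type(node).__name__]
        if isinstance(node, (ast.Name, ast.Constant)):
            token = node.id if isinstance(node, ast.Name) else repr(node.value)
            leaves.append((token, path))
        for child in ast.iter_child_nodes(node):
            walk(child, path)

    walk(tree, [])
    contexts = []
    for (tok_a, pa), (tok_b, pb) in itertools.combinations(leaves, 2):
        # Depth of the lowest common ancestor (excluding the leaves themselves).
        common = 0
        while common < min(len(pa), len(pb)) - 1 and pa[common] == pb[common]:
            common += 1
        lca = pa[common - 1]
        # Encode the path: climb (^) from leaf a to the LCA, descend (v) to leaf b.
        path = "^".join(reversed(pa[common:])) + "^" + lca + "v" + "v".join(pb[common:])
        contexts.append((tok_a, path, tok_b))
    return contexts

for ctx in leaf_paths("def add(a, b):\n    return a + b\n"):
    print(ctx)  # → ('a', 'Name^BinOpvName', 'b')
```

Each such context would then be embedded and aggregated by the Code2Vec-style attention mechanism into a single code vector for the downstream vulnerability classifier.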
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
Procedia PDF Downloads 106
91 Formulation and Test of a Model to Explain the Complexity of Road Accident Events in South Africa
Authors: Dimakatso Machetele, Kowiyou Yessoufou
Abstract:
Whilst several studies have indicated that road accident events might be more complex than previously thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity) model to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a polynomial shape given by the equation y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), with y = death rate and x = year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003), and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the number of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001), and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes.
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to remain roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties was also predicted to remain roughly constant at 93,531, although this number may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecasted to reach 11,241 within the next 10 years, with the worst-case estimate at 19,034 over the same period. Finally, the number of fatalities is also predicted to remain roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trend of road accidents, casualties, fatal crashes, and deaths in South Africa.
Keywords: road accidents, South Africa, statistical modelling, trends
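As a quick arithmetic check of the fatality-rate polynomial quoted in the abstract above, the standard vertex formula x = -b/(2a) locates its peak; treating x as a year index is an assumption for illustration.

```python
# Fitted fatality-rate curve from the abstract:
# y = -0.0114x^2 + 1.2378x - 2.2627 (y = death rate, x = year index).
a, b, c = -0.0114, 1.2378, -2.2627

def death_rate(x):
    return a * x * x + b * x + c

# A downward-opening quadratic peaks at x = -b / (2a).
x_peak = -b / (2 * a)
print(round(x_peak, 1), round(death_rate(x_peak), 1))  # → 54.3 31.3
```

The fitted curve therefore implies the death rate climbs to a peak of roughly 31 per 100,000 about 54 year-units into the series before declining, consistent with the quoted average of 23.14 over the whole period.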
Procedia PDF Downloads 161
90 Media Representations of Gender: Intersectional Analysis of Impact/Influence on Collective Consciousness and Perceptions of Feminism, Gender, and Gender Equality: Evidence from Cultural/Media Sources in Nigeria
Authors: Olatawura O. Ladipo-Ajayi
Abstract:
The concept of gender equality is not new, nor are the efforts and movements toward achieving it. The idea of gender equality originates from the early feminist movements of the 1880s and their subsequent waves, all fighting to promote gender rights and equality with varying focuses and groups. Nonetheless, progress toward gender equality is not advancing at similar rates across the world and across groups. This uneven progress is often due to varying social, cultural, political, and economic factors, some of which underpin intersectional identities and influence perceptions of gender and associated gender roles that create gender inequality. In assessing perceptions of gender and the assigned roles or expectations that cause inequalities, intersectionality provides a framework to interrogate how these perceptions are molded and reinforced to create marginalization. Intersectionality is increasingly becoming a lens and approach for better understanding inequalities and oppression, gender rights and equality, the challenges to their achievement, and how best to move forward in the fight for gender rights, inclusion, and equality. In light of this, this paper looks at intersectional representations of gender in the media within cultural/social contexts, particularly entertainment media, and how these influence perceptions of gender and impact progress toward achieving gender equality and advocacy. Furthermore, the paper explores how various identities and, to an extent, personal experiences play a role in the perceptions and representations of gender, as well as influence the development of policies that promote gender equality in general. Finally, the paper applies qualitative and auto-ethnographic research methods, building on intersectional and social construction frameworks, to analyze gender representation in media using a literature review of scholarly works, news items, and cultural/social sources like Nigerian movies.
It concludes that media influences ideas and perceptions of gender, gender equality, and rights, and that not enough is being done in Global South media to challenge the hegemonic patriarchal and binary concepts of gender. As such, the growth of feminism and the attainment of gender equality are slow, and the concepts are often misunderstood. There is a need to leverage media outlets to influence perceptions and start informed conversations on gender equality and feminism, and to build collective consciousness locally to improve advocacy for equal gender rights. Changing the gender narrative in everyday media, including entertainment media, is one way to influence public perceptions of gender, promote the concept of gender equality, and advocate for policies that support equality.
Keywords: gender equality, gender roles/socialization, intersectionality, representation of gender in media
Procedia PDF Downloads 105
89 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation
Authors: Mohammad Abu-Shaira, Weishi Shi
Abstract:
Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift-adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, and their performance is assessed across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation in model performance caused by drifts. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data.
Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models that must address concept drift.
Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression
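The abstract describes drift detection via variable thresholding on streaming errors but gives no implementation details. A minimal, generic sketch of that idea — an adaptive threshold over the running error statistics, with all class and parameter names hypothetical rather than taken from the authors' LSTM-SCCM code — might look like:

```python
import math

class VariableThresholdDetector:
    """Flag concept drift when a streaming prediction error exceeds a
    dynamically updated threshold (mean + k * std of errors seen so far).
    Generic illustration only; not the authors' LSTM-SCCM implementation."""

    def __init__(self, k=3.0, warmup=30):
        self.k = k            # sensitivity multiplier on the std deviation
        self.warmup = warmup  # samples required before drift can be flagged
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0         # sum of squared deviations (Welford's method)

    def update(self, error):
        """Return True if `error` signals drift, then fold it into the stats."""
        drift = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            drift = error > self.mean + self.k * std
        # Welford's online update keeps the threshold adaptive, with no
        # assumption about the underlying data distribution
        self.n += 1
        delta = error - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (error - self.mean)
        return drift
```

Because the threshold tracks the recent error statistics rather than a fixed cutoff, small fluctuations pass silently while a sudden jump in error — the signature of abrupt drift — is flagged immediately, which is the proactive behavior the abstract contrasts with retrain-after-degradation approaches.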
Procedia PDF Downloads 11
88 Perception of Eco-Music From the Contents of the Earth’s Sound Ecosystem
Authors: Joni Asitashvili, Eka Chabashvili, Maya Virsaladze, Alexander Chokhonelidze
Abstract:
Studying the soundscape is a major challenge in many countries of the civilized world today. The sound environment and music itself are part of the Earth's ecosystem; therefore, researching their positive or negative impact is important for a clean and healthy environment. The acoustics of nature gave people many musical ideas, and people enriched musical features and performance skills with the ability to imitate surrounding sounds. For example, populations surrounded by mountains invented the technique of antiphonal singing, which mimics the effect of an echo. The Canadian composer Raymond Murray Schafer viewed the world as a kind of musical instrument with ever-renewing tuning. He coined the term "soundscape" as a name for natural environmental sound, including the sound field of the Earth; it can be said that this is the material from which the "music of nature" is constructed. In the 21st century, a new field, ecomusicology, has emerged within musical art to study the sound ecosystem and various issues related to it. According to Aaron Allen, ecomusicology considers the interconnections between music, culture, and nature. Eco-music is a field of ecomusicology concerned with the depiction and realization of practical processes using modern composition techniques: finding an artificial sound source (instrumental or electronic) for a piece that will blend into the soundscape of sound oases, and creating a composition that sounds in harmony with the vibrations of humans, nature, the environment, and the micro- and macrocosm as a whole. Currently, we are exploring the ambient sound of the Georgian urban and suburban environment to discover "sound oases" and compose eco-music works. We call a "sound oasis" an environment with a specific ecosystem sound that can be used in a musical piece as an instrument. The most interesting early examples of eco-music are the round dances, which were already created in the BC era. In round dances, people would feel a united energy.
This urge to unite reveals itself in our age too, manifesting in a variety of social media. The virtual world, however, is not enough for healthy interaction; we therefore created a plan for a "contemporary round dance" in a sound oasis found during an expedition in Georgian caves, where people interact with the cave's soundscape and eco-music, feel each other's shared energy, and listen to the sound of the earth. This project could be considered a contemporary round dance, a long improvisation, and a particular type of art therapy in which everyone can participate in an artistic process. We would like to present the research results of our experimental eco-music performance.
Keywords: eco-music, environment, sound, oasis
Procedia PDF Downloads 61
87 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. With such an assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers.
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers
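The amplitude-independence claim can be illustrated with a short derivation; the symbols below (a proportionality constant $\alpha$, surface elevation $h$, tuning radii $R_x$, $R_y$) are illustrative choices, not notation taken from the paper:

```latex
% Friction coefficient proportional to the modulus of the surface gradient:
\mu(x,y) = \alpha \,\lVert \nabla h(x,y) \rVert .
% For small oscillations the normal force is N \approx m g, so the
% tangential friction force is
F_t \approx \mu N = \alpha\, m g \,\lVert \nabla h(x,y) \rVert .
% On a paraboloidal surface tuned to the two principal structural modes,
h(x,y) = \frac{x^2}{2R_x} + \frac{y^2}{2R_y},
\qquad
\nabla h = \left( \frac{x}{R_x},\; \frac{y}{R_y} \right).
% Scaling the displacement (x, y) by a factor c scales both the restoring
% force and the friction force F_t by c: the dissipative term is homogeneous
% of degree one, so the equivalent damping ratio does not depend on the
% oscillation amplitude.
```

This is why, unlike a constant-coefficient (Coulomb) friction absorber, the HBPA keeps its optimal tuning at any excitation level.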
Procedia PDF Downloads 148
86 Clinical Features, Diagnosis and Treatment Outcomes in Necrotising Autoimmune Myopathy: A Rare Entity in the Spectrum of Inflammatory Myopathies
Authors: Tamphasana Wairokpam
Abstract:
Inflammatory myopathies (IMs) have long been recognised as a heterogeneous family of myopathies with acute, subacute, and sometimes chronic presentation, and they are potentially treatable. Necrotizing autoimmune myopathies (NAM) are a relatively new subset of myopathies. Patients generally present with subacute onset of proximal myopathy and significantly elevated creatine kinase (CK) levels. It is increasingly recognised that there are limitations to the independent diagnostic utility of muscle biopsy, and immunohistochemistry tests may reveal important information in these cases. The traditional classification of IMs failed to recognise NAM as a separate entity and did not adequately emphasize the diversity of IMs. This review and case report on NAM aims to highlight the heterogeneity of this entity and focus on its distinct clinical presentation, biopsy findings, the specific auto-antibodies implicated, and the available treatment options with prognosis. This article is a meta-analysis of the literature on NAM and a case report illustrating the clinical course, investigation and biopsy findings, antibodies implicated, and management of a patient with NAM. The main databases used for the search were PubMed, Google Scholar, and the Cochrane Library. Altogether, 67 publications have been taken as references. Two biomarkers, anti-signal recognition particle (SRP) and anti-3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR) antibodies, have been found to be associated with NAM in about two-thirds of cases. Interestingly, anti-SRP-associated NAM appears to be more aggressive in its clinical course than its anti-HMGCR-associated counterpart. Biopsy shows muscle fibre necrosis without inflammation. There are reports of statin-induced NAM in which progression of the myopathy has been seen even after discontinuation of statins, pointing towards an underlying immune mechanism. Diagnosing NAM is essential, as it requires more aggressive immunotherapy than other types of IMs.
Most cases are refractory to corticosteroid monotherapy. Immunosuppressive therapy with other immunotherapeutic agents such as IVIg, rituximab, mycophenolate mofetil, and azathioprine has been explored and found to have a role in the treatment of NAM. In conclusion, given its heterogeneity, NAM appears to be not a single entity but a collection of different forms, despite the similarities in presentation, and its classification remains an evolving field. A thorough understanding of the underlying mechanism and the clinical correlation with the antibodies associated with NAM is essential for efficacious management and disease prognostication.
Keywords: inflammatory myopathies, necrotising autoimmune myopathies, anti-SRP antibody, anti-HMGCR antibody, statin induced myopathy
Procedia PDF Downloads 103
85 Tuning the Emission Colour of Phenothiazine by Introduction of Withdrawing Electron Groups
Authors: Andrei Bejan, Luminita Marin, Dalila Belei
Abstract:
Phenothiazine, with its electron-rich nitrogen and sulfur heteroatoms, has a high electron-donating ability which promotes good conjugation and therefore a low band gap, improving charge carrier mobility and shifting light emission into the visible domain. Moreover, its non-planar butterfly conformation inhibits molecular aggregation and thus preserves the fluorescence quantum yield quite well in the solid state compared to solution. Therefore, phenothiazine and its derivatives are promising hole-transport materials for use in organic electronic and optoelectronic devices such as light-emitting diodes, photovoltaic cells, integrated circuit sensors, or driving circuits for large-area display devices. The objective of this work was to obtain a series of new phenothiazine derivatives by introduction of different electron-withdrawing substituents, such as formyl, carboxyl, and cyanoacryl units, in order to create a push-pull system with the potential to improve the electronic and optical properties. A bromine atom was used as an electron-donor moiety to further extend the existing conjugation. The compounds under study were structurally characterized by FTIR and 1H-NMR spectroscopy and single-crystal X-ray diffraction; the latter also brought information regarding the supramolecular architecture of the compounds. Photophysical properties were monitored by UV-vis and photoluminescence spectroscopy, while the electrochemical behavior was established by cyclic voltammetry. The absorption maxima of the studied compounds vary over a large range (322-455 nm), reflecting different degrees of electronic delocalization depending on the nature of the substituent. In a similar manner, the emission spectra reveal different colors of emitted light, a red shift being evident for the groups with higher electron-withdrawing ability.
The emitted light is pure and saturated for the compounds containing the strongly withdrawing formyl or cyanoacryl units and reaches the highest quantum yield, 71%, for the compound containing bromine and cyanoacrylic units. The electrochemical study shows reversible oxidation and reduction processes for all the compounds and a close correlation of the HOMO-LUMO band gap with the nature of the substituent. All these findings suggest the obtained compounds are promising materials for optoelectronic devices.
Keywords: electrochemical properties, phenothiazine derivatives, photoluminescence, quantum yield
Procedia PDF Downloads 329