Search results for: statistical machine translation
5350 Assessment of Barriers to the Clinical Adoption of Cell-Based Therapeutics
Authors: David Pettitt, Benjamin Davies, Georg Holländer, David Brindley
Abstract:
Cellular based therapies, whose origins can be traced from the intertwined concepts of tissue engineering and regenerative medicine, have the potential to transform the current medical landscape and offer an approach to managing what were once considered untreatable diseases. However, despite a large increase in basic science activity in the cell therapy arena alongside a growing portfolio of cell therapy trials, the number of industry products available for widespread clinical use correlates poorly with such a magnitude of activity, with the number of cell-based therapeutics in mainstream use remaining comparatively low. This research serves to quantitatively assess the barriers to the clinical adoption of cell-based therapeutics through identification of unique barriers, specific challenges and opportunities facing the development and adoption of such therapies.
Keywords: cell therapy, clinical adoption, commercialization, translation
Procedia PDF Downloads 400
5349 Characteristics of Cumulative Distribution Function of Grown Crack Size at Specified Fatigue Crack Propagation Life under Different Maximum Fatigue Loads in AZ31
Authors: Seon Soon Choi
Abstract:
Magnesium alloy has been widely used in structures such as automobiles. It is necessary to consider the probabilistic characteristics of a structural material because the fatigue behavior of a structure involves randomness and uncertainty. The purpose of this study is to find the characteristics of the cumulative distribution function (CDF) of the grown crack size at a specified fatigue crack propagation life and to investigate statistical crack propagation in magnesium alloys. The statistical fatigue data of the grown crack size are obtained through fatigue crack propagation (FCP) tests under different maximum fatigue load conditions conducted on replicated specimens of magnesium alloys. The 3-parameter Weibull distribution is used to find the CDF of grown crack size. The CDF of grown crack size under a larger maximum fatigue load has longer tails below 10 percent and above 90 percent. Fatigue failure occurs more easily as the tails of the CDF of grown crack size become longer. The fatigue behavior under the larger maximum fatigue load condition shows a more rapid propagation and failure mode.
Keywords: cumulative distribution function, fatigue crack propagation, grown crack size, magnesium alloys, maximum fatigue load
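A minimal illustrative sketch, not the authors' code, of fitting a 3-parameter Weibull distribution to grown crack sizes and inspecting the tails of the fitted CDF; the crack-size values below are synthetic placeholders rather than the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical grown crack sizes (mm) at a specified fatigue crack propagation life
rng = np.random.default_rng(0)
crack_sizes_mm = 5.0 + 0.8 * rng.weibull(2.5, size=60)

# scipy's weibull_min has three parameters: shape, location, and scale
shape, loc, scale = stats.weibull_min.fit(crack_sizes_mm)

# Inspect the lower and upper tails of the fitted CDF
p10 = stats.weibull_min.ppf(0.10, shape, loc=loc, scale=scale)
p90 = stats.weibull_min.ppf(0.90, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}")
print(f"10th percentile: {p10:.3f} mm, 90th percentile: {p90:.3f} mm")
```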
Procedia PDF Downloads 288
5348 Hardware in the Loop Platform for Virtual Commissioning: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Ana Maria Macarulla
Abstract:
Hydraulic-press commissioning consumes a great amount of man-hours, due to the fact that it takes place several miles away from where the press has been designed. This factor is exacerbated by control designers’ lack of knowledge about what the final controller gains will be before they start working with the machine. Virtual commissioning has been postulated as an optimal solution to deal with this lack of knowledge. Here, a case study is presented in which a controller is set up against a real-time model of a hydraulic press. The press model is designed following manufacturer specifications and is embedded in a real-time simulator. This methodology ensures that the model achieves responses similar to those of the real machine that will be installed in industry. A deterministic communication protocol is in charge of the bidirectional information transmission between the real-time model and the controller. This platform allows the engineer to test and verify the final control responses with exactly the same hardware that is going to be installed in the hydraulic press, in other words, to perform a virtual commissioning of the electro-hydraulic actuator. The Hardware in the Loop (HiL) platform validates the designed control algorithms under laboratory conditions, without risk to the machine, which allows them to be embedded afterwards in the industrial environment without further modifications.
Keywords: deterministic communication protocol, electro-hydraulic actuator, hardware in the loop, real-time, virtual commissioning
Procedia PDF Downloads 143
5347 Effectiveness of Homoeopathic Medicine Conium Maculatum 200 C for Management of Pyuria
Authors: Amir Ashraf
Abstract:
Homoeopathy is an alternative system of medicine discovered by the German physician Samuel Hahnemann in 1796. It has been used by people globally for various health conditions for more than 200 years. In India, homoeopathy is considered a major system of alternative medicine. Homoeopathy is found effective in various medical conditions, including pyuria. Pyuria is the condition in which pus cells are found in urine. Homoeopathy is very useful for reducing pus cells, and homoeopathically potentized Conium Mac (Hemlock) is an important remedy commonly used for reducing pyuria. Aim: To reduce the amount of pus cells found in urine using Conium Mac 200C. Methods: Design: Small N design. Samples: Purposive sampling with 5 cases diagnosed with pyuria. Tools: Personal Data Schedule and ICD-10 criteria for pyuria. Techniques: The potentized homoeopathic medicine Conium Mac, in the 200th potency, is used. Statistical Analysis: The statistical analyses were done using non-parametric tests. Results: A significant pre/post difference was identified. Conclusion: The homoeopathic potency Conium Mac 200 C is effective in reducing the increased level of pus cells found in urine samples.
Keywords: homoeopathy, alternative medicine, pyuria, Conium Mac, small N design, non-parametric tests, homeopathic physician, Ashirvad Hospital, Kannur
Procedia PDF Downloads 335
5346 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes
Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse
Abstract:
Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine due to the many tools it provides to physicians, such as diagnosis support and risk assessment. Physician notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an advanced approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. An LLM (Large Language Model) was given a narrative (physician’s notes) and prompted to extract four features (details about the patient). The narrative was found in a datasheet that contained six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns represent the accurate responses that the LLM attempts to output. With the given narrative, the LLM would output its response and extract the age, gender, diagnosis, and injured body part, with each category taking up one line. The output would then be cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand-evaluation. The unclear count essentially represented the extractions the LLM missed. This can also be understood as the recall score ([total - false negatives] over total). The rest of these correspond to the precision score ([total - false positives] over total). Substring comparison evaluated the likeness of the validation (X) and extracted (Y) columns by checking whether X’s results were a substring of Y’s findings and vice versa. LLM comparison directly asked an LLM if the X and Y results were similar. LLM re-check prompted the LLM to see if the extracted results could be found in the narrative. Lastly, a selection of 1,000 random narratives was hand-evaluated to give an estimate of how well the LLM-based feature extraction model performed. With a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. The reason for these low figures is the minute differences between answers. For example, the ‘chest’ is part of the ‘upper trunk’; however, these models cannot detect that. On the other hand, the LLM re-check and the subset of hand-tested narratives showed precision scores of 96% and 95%. If this subset is used to extrapolate the possible outcome of the whole 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicated that an LLM-based feature extraction model could be a useful way for medical data in sports to be collected and analyzed by machine learning models. Wide use of this method could potentially increase the availability of data, thus improving machine learning algorithms and supporting doctors with more enhanced tools.
Procedia PDF Downloads 0
5345 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering
Authors: Sharifah Mousli, Sona Taheri, Jiayuan He
Abstract:
Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual’s ability to function in social, academic, and employment settings. Although, to the best of our knowledge, there is no effective medication known to treat ASD, early intervention can significantly improve an affected individual’s overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods in ASD, as a large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis of seven clustering approaches (K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and linear vector quantisation) as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performances of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.
Keywords: autism spectrum disorder, clustering, optimization, unsupervised machine learning
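A minimal sketch of the kind of unsupervised comparison described above, using scikit-learn on placeholder data; COMSEP-Clust is omitted since no public implementation is assumed here, and the features merely stand in for ASD screening attributes:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = StandardScaler().fit_transform(rng.normal(size=(200, 10)))  # placeholder screening features

models = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
    "model-based (GMM)": GaussianMixture(n_components=2, random_state=0),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    print(name, round(silhouette_score(X, labels), 3))  # internal cluster-quality index
```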
Procedia PDF Downloads 116
5344 Statistical Model to Examine the Impact of the Inflation Rate and Real Interest Rate on the Bahrain Economy
Authors: Ghada Abo-Zaid
Abstract:
Introduction: Oil is one of the main income sources in Bahrain. Low oil prices influence economic growth and the investment rate in Bahrain. For example, economic growth was 3.7% in 2012 and fell to 2.9% in 2015. The investment rate was 9.8% in 2012 and fell to 5.9% and -12.1% in 2014 and 2015, respectively. The inflation rate peaked at 3.3% in 2013. Objectives: The objective here is to build statistical models to examine the effect of the real interest rate and the inflation rate on economic growth in Bahrain from 2000 to 2018. Methods: This study is based on 18 years of data, and a multiple regression model is used for the analysis. All of the missing data are omitted from the analysis. Results: A regression model is used to examine the association between the gross national product (GNP), the inflation rate, and the real interest rate. We found that (i) an increase in the real interest rate decreases the GNP, and (ii) an increase in the inflation rate does not affect economic growth in Bahrain, since the average inflation rate was almost 2%, which is considered a low percentage. Conclusion: The real interest rate has a significant impact on the GNP in Bahrain, while the inflation rate does not show any negative influence on the GNP, as it was not large enough to negatively affect the economic growth rate in Bahrain.
Keywords: gross national product, egypt, regression model, interest rate
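A minimal sketch, using statsmodels, of the multiple regression described above; the figures are hypothetical placeholders, not the Bahrain series:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical annual observations (GNP growth %, inflation %, real interest rate %)
df = pd.DataFrame({
    "gnp_growth":    [3.7, 3.4, 3.2, 2.9, 3.1, 2.5],
    "inflation":     [2.0, 3.3, 2.7, 1.8, 2.1, 2.4],
    "real_interest": [1.5, 1.2, 2.0, 2.4, 2.8, 2.6],
})
X = sm.add_constant(df[["inflation", "real_interest"]])
model = sm.OLS(df["gnp_growth"], X).fit()
print(model.params)   # signs of the coefficients indicate the direction of each effect
```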
Procedia PDF Downloads 166
5343 Charting Sentiments with Naive Bayes and Logistic Regression
Authors: Jummalla Aashrith, N. L. Shiva Sai, K. Bhavya Sri
Abstract:
The swift progress of web technology has not only amassed a vast reservoir of internet data but also triggered a substantial surge in data generation. The internet has metamorphosed into one of the dynamic hubs for online education, idea dissemination, as well as opinion-sharing. Notably, the widely utilized social networking platform Twitter is experiencing considerable expansion, providing users with the ability to share viewpoints, participate in discussions spanning diverse communities, and broadcast messages on a global scale. The upswing in online engagement has sparked a significant curiosity in subjective analysis, particularly when it comes to Twitter data. This research is committed to delving into sentiment analysis, focusing specifically on the realm of Twitter. It aims to offer valuable insights into deciphering information within tweets, where opinions manifest in a highly unstructured and diverse manner, spanning a spectrum from positivity to negativity, occasionally punctuated by neutrality expressions. Within this document, we offer a comprehensive exploration and comparative assessment of modern approaches to opinion mining. Employing a range of machine learning algorithms such as Naive Bayes and Logistic Regression, our investigation plunges into the domain of Twitter data streams. We delve into overarching challenges and applications inherent in the realm of subjectivity analysis over Twitter.
Keywords: machine learning, sentiment analysis, visualisation, python
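A minimal sketch of the comparison described above (Naive Bayes vs. logistic regression on tweet text) using scikit-learn; the example tweets and labels are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["I love this new phone", "Worst service ever", "It is okay, nothing special",
          "Absolutely fantastic experience", "I am so disappointed"]
labels = ["positive", "negative", "neutral", "positive", "negative"]

for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    pipe = make_pipeline(TfidfVectorizer(), clf)   # bag-of-words features + classifier
    pipe.fit(tweets, labels)
    print(type(clf).__name__, pipe.predict(["the update is terrible"]))
```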
Procedia PDF Downloads 56
5342 Dynamic Cellular Remanufacturing System (DCRS) Design
Authors: Tariq Aljuneidi, Akif Asil Bulgak
Abstract:
Remanufacturing may be defined as the process of bringing used products to a “like-new” functional state with a warranty to match, and it is one of the most popular product end-of-life scenarios. An efficient remanufacturing network leads to an efficient design of a sustainable manufacturing enterprise. In a remanufacturing network, products are collected from the customer zone, disassembled, and remanufactured at a suitable remanufacturing facility. In this respect, another issue to consider is how the returned product is to be remanufactured, in other words, what the best layout for such a facility is. In order to achieve a sustainable manufacturing system, Cellular Manufacturing System (CMS) designs are highly recommended; CMSs combine the high throughput rates of line layouts with the flexibility offered by functional layouts (job shops). Introducing the CMS while designing a remanufacturing network will benefit the utilization of such a network. This paper presents and analyzes a comprehensive mathematical model for the design of Dynamic Cellular Remanufacturing Systems (DCRSs). The proposed model is the first one to date that considers CMS and remanufacturing systems simultaneously. The proposed DCRS model considers several manufacturing attributes such as multi-period production planning, dynamic system reconfiguration, duplicate machines, machine capacity, available time for workers, worker assignments, and machine procurement, where the demand is totally satisfied from returned products. A numerical example is presented to illustrate the proposed model.
Keywords: cellular manufacturing system, remanufacturing, mathematical programming, sustainability
Procedia PDF Downloads 378
5341 Brazilian Sign Language: A Synthesis of the Research in the Period from 2000 to 2017
Authors: Maria da Gloria Guara-Tavares
Abstract:
This article reports a synthesis of the research in Brazilian Sign Language conducted from 2000 to 2017. The objective of the synthesis was to identify the most researched areas and the most used methodologies. Articles published in three Brazilian journals of Translation Studies, unpublished dissertations and theses were included in the analysis. Abstracts and the method sections of the papers were scrutinized. Sixty studies were analyzed, and overall results indicate that the research in Brazilian Sign Language has been fragmented in several areas such as linguistic aspects, facial expressions, subtitling, identity issues, bilingualism, and interpretation strategies. Concerning research methods, the synthesis reveals that most research is qualitative in nature. Moreover, results show that the cognitive aspects of Brazilian Sign Language seem to be poorly explored. Implications for a future research agenda are also discussed.
Keywords: Brazilian sign language, qualitative methods, research agenda, synthesis
Procedia PDF Downloads 240
5340 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow
Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam
Abstract:
Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and their relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on the severity of road crashes. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway (straight or curved section), time of day, driveway density, presence of median, median opening, gradient, operating speed, and annual average daily traffic. These variables were considered after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and negative binomial regression. The output from the statistical models showed that the horizontal alignment components of the highway, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, viz., fatal as well as injury crashes. Further, annual average daily traffic has a significant effect on severity compared to the other variables. The contribution of highway horizontal alignment components to crash severity is also significant. Logit models can predict crashes better than negative binomial regression models. The results of the study will help transport planners to look into these aspects at the planning stage itself in the case of highways operated under heterogeneous traffic flow conditions.
Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety
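A minimal sketch of a binary logit model for crash severity in the spirit of the analysis above, using statsmodels; the variables and the data-generating process are hypothetical placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "on_curve": rng.integers(0, 2, n),          # horizontal alignment: curved section
    "night": rng.integers(0, 2, n),             # time of day
    "operating_speed": rng.normal(85, 10, n),   # km/h
    "aadt_thousands": rng.normal(25, 6, n),     # annual average daily traffic
})
# Hypothetical data-generating process, for illustration only
lin = -8 + 0.6*df["on_curve"] + 0.5*df["night"] + 0.06*df["operating_speed"] + 0.05*df["aadt_thousands"]
df["severe"] = rng.binomial(1, 1/(1 + np.exp(-lin)))   # 1 = fatal/injury crash

X = sm.add_constant(df[["on_curve", "night", "operating_speed", "aadt_thousands"]])
print(sm.Logit(df["severe"], X).fit(disp=False).params)
```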
Procedia PDF Downloads 302
5339 Regional Flood-Duration-Frequency Models for Norway
Authors: Danielle M. Barna, Kolbjørn Engeland, Thordis Thorarinsdottir, Chong-Yu Xu
Abstract:
Design flood values give estimates of flood magnitude for a given return period and are essential to making adaptive decisions around land use planning, infrastructure design, and disaster mitigation. Often design flood values are needed at locations with insufficient data. Additionally, in hydrologic applications where flood retention is important (e.g., floodplain management and reservoir design), design flood values are required at different flood durations. A statistical approach to this problem is the development of a regression model for extremes where some of the parameters depend on flood duration in addition to being covariate-dependent. In hydrology, this is called a regional flood-duration-frequency (regional QDF) model. Typically, the underlying statistical distribution is chosen to be the Generalized Extreme Value (GEV) distribution. However, as the support of the GEV distribution depends on both its parameters and the range of the data, special care must be taken with the development of the regional model. In particular, we find that the GEV is problematic when developing a GAMLSS-type analysis due to the difficulty of proposing a link function that is independent of the unknown parameters and the observed data. We discuss these challenges in the context of developing a regional QDF model for Norway.
Keywords: design flood values, bayesian statistics, regression modeling of extremes, extreme value analysis, GEV
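A minimal single-site sketch of the extreme value ingredient described above: fitting a GEV distribution to annual maxima and reading off a design flood for a given return period; the flows are synthetic, and the full regional QDF regression is not reproduced:

```python
from scipy import stats

# Synthetic annual maximum flows (m^3/s) standing in for an observed record
annual_max_flow = stats.genextreme.rvs(c=-0.1, loc=120, scale=35, size=50, random_state=7)

c, loc, scale = stats.genextreme.fit(annual_max_flow)
return_period = 100                                   # years
design_flood = stats.genextreme.ppf(1 - 1/return_period, c, loc=loc, scale=scale)
print(f"Estimated {return_period}-year flood: {design_flood:.1f} m^3/s")
```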
Procedia PDF Downloads 72
5338 Examination of the Main Behavioral Patterns of Male and Female Students in Islamic Azad University
Authors: Sobhan Sobhani
Abstract:
This study examined the behavioral patterns of students and their determinants according to the "symbolic interaction" sociological perspective, in the form of 7 hypotheses. Behavioral patterns of students were classified into 8 categories: religious, scientific, political, artistic, sporting, national, parents, and teachers. They were evaluated through student opinions on a five-point Likert rating scale. The statistical population included all male and female students of Islamic Azad University, Behabahan branch, among whom 600 students (268 females and 332 males) were selected randomly. The following statistical methods were used: frequency and percentage, mean, t-test, Pearson correlation coefficient, and multi-way analysis of variance. The results obtained from the statistical analysis showed that: 1- There is a significant difference between male and female students in terms of disposition to religious figures, artists, teachers, and parents. 2- There is a significant difference between students of urban and rural areas in terms of assuming behavioral patterns of religious, political, scientific, artistic, and national figures and teachers. 3- The most important criterion for selecting behavioral patterns of students is intellectual understanding with the pattern. 4- The most important factor influencing the behavioral patterns of male and female students is parents, followed by friends. 5- Boys are affected by teachers, the Internet, and satellite programs more than girls. Girls assume behavioral patterns from books more than boys. 6- There is a significant difference between students in human sciences, technical, medical, and engineering disciplines in terms of selecting religious and political figures as behavioral patterns. 7- There is a significant difference between students belonging to different subcultures in terms of assuming behavioral patterns of religious, scientific, and cultural figures. 8- Between first- and fourth-year students, there is a significant difference only in selecting religious figures as behavioral patterns. 9- There is a significant negative correlation between the education level of parents and the selection of religious and political figures and teachers. 10- There is a significant negative correlation between family income and the selection of political and religious figures.
Keywords: behavioral patterns, male and female students, Islamic Azad University
Procedia PDF Downloads 365
5337 Performance Analysis of 5G for Low Latency Transmission Based on Universal Filtered Multi-Carrier Technique and Interleave Division Multiple Access
Authors: A. Asgharzadeh, M. Maroufi
Abstract:
The 5G mobile communication system has drawn more and more attention. The 5G system needs to provide three different types of services: enhanced Mobile BroadBand (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low-latency communication (URLLC). Universal Filtered Multi-Carrier (UFMC), Filter Bank Multicarrier (FBMC), and Filtered Orthogonal Frequency Division Multiplexing (f-OFDM) are suggested as well-known candidate waveforms for the coming 5G system. Machine-to-machine (M2M) communications are one of the essential applications in 5G and involve exchanging concise messages with very short latency. In UFMC systems, the subcarriers are grouped into subbands, whereas in f-OFDM a single subband covers the entire band. Furthermore, in FBMC, a subband includes only one subcarrier, and the number of subbands is the same as the number of subcarriers. This paper mainly discusses the performance of UFMC with different parameters for the UFMC system. The paper also shows that UFMC is the best choice, outperforming OFDM in all cases and FBMC in the case of very short packets, while performing similarly for long sequences with channel estimation techniques for Interleave Division Multiple Access (IDMA) systems.
Keywords: universal filtered multi-carrier technique, UFMC, interleave division multiple access, IDMA, fifth-generation, subband
Procedia PDF Downloads 134
5336 Simulation Based Analysis of Gear Dynamic Behavior in Presence of Multiple Cracks
Authors: Ahmed Saeed, Sadok Sassi, Mohammad Roshun
Abstract:
Gears are important components with a vital role in many rotating machines. One of the common causes of gear failure is a tooth fatigue crack; however, its early detection is still a challenging task. The objective of this study is to develop a numerical model that simulates the effect of tooth cracks on the resulting gear vibrations and consequently permits early fault detection. In contrast to other published papers, this work incorporates the possibility of multiple simultaneous cracks with different depths. As cracks significantly alter the stiffness of the tooth, finite element software is used to determine the stiffness variation with respect to the angular position for different combinations of crack orientation and depth. A simplified six-degrees-of-freedom nonlinear lumped parameter model of a one-stage spur gear system is proposed to study the vibration with and without cracks. The model developed for calculating the stiffness with the crack permitted updating the physical parameters of the equations of motion describing the vibration of the gearbox. The vibration simulation results of the gearbox were obtained using Simulink/Matlab. The effect of one crack with different levels was studied thoroughly. The change in the mesh stiffness and the vibration response were found to be consistent with previously published works. In addition, various statistical time-domain parameters were considered. They showed different degrees of sensitivity toward the crack depth. Multiple cracks were also introduced at different locations, and the vibration response along with the statistical parameters were obtained again for a general case of degradation (increase in crack depth, crack number, and crack locations). It was found that although some parameters increase in value as the deterioration level increases, they show almost no change or even decrease when the number of cracks increases. Therefore, the use of any statistical parameter could be misleading if not considered in an appropriate way.
Keywords: spur gear, cracked tooth, numerical simulation, time-domain parameters
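A short sketch of the kind of time-domain statistical indicators mentioned above (RMS, crest factor, kurtosis) computed on a synthetic vibration signal; the signal is illustrative and not the simulated gearbox response:

```python
import numpy as np
from scipy.stats import kurtosis

fs = 10_000                                    # sampling rate, Hz
t = np.arange(0, 1.0, 1/fs)
mesh = np.sin(2*np.pi*600*t)                   # gear-mesh tone (hypothetical)
impulses = (np.sin(2*np.pi*25*t) > 0.995) * 2  # crude periodic impulses standing in for a cracked tooth
signal = mesh + impulses + 0.1*np.random.default_rng(0).normal(size=t.size)

rms = np.sqrt(np.mean(signal**2))
crest_factor = np.max(np.abs(signal)) / rms
kurt = kurtosis(signal, fisher=False)
print(f"RMS={rms:.3f}, crest factor={crest_factor:.2f}, kurtosis={kurt:.2f}")
```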
Procedia PDF Downloads 266
5335 Impact of Yogic Exercise on Cardiovascular Function on Selected College Students of High Altitude
Authors: Benu Gupta
Abstract:
The purpose of the study was to assess the impact of yogic exercise on cardiovascular function in selected college students at high altitude. The research was conducted on college students at high altitude in Shimla, examining their cardiovascular function [blood pressure (BP), VO2 max (TLC), and pulse rate (PR)] in respect to yogic exercise. A total of 139 students were randomly selected from Himachal University colleges in Shimla. The study was conducted in three phases. The subjects were identified in the first phase of the research program; in the next phase, they were physiologically tested, and the yogic exercise battery was administered over different time frames. All subjects were treated with three months of yogic exercise. The entire group of students was then evaluated physiologically again [cardiovascular measurements: blood pressure (BP), VO2 max (TLC), and pulse rate (PR)] with standard equipment. Statistical analyses of the variance of the variables (PR, BP (SBP & DBP), and TLC) were done. The results reveal that there was a significant difference in TLC, whereas there was no significant difference in PR. For BP, the statistical analysis suggests no significant difference. The results also showed that the BP of the participants was more inclined towards the normal standard BP, i.e., 120/80 mmHg.
Keywords: cardiovascular function, college students, high altitude, yogic exercise
Procedia PDF Downloads 231
5334 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. The energy efficiency analysis for these cases is mostly focused on studying each machine or production step, but it is not common to study the quality that the production process achieves from an aggregated-value viewpoint, which can be used as a quality measurement for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the aggregated exergy of three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique; therefore, the use of this theory is proposed for discontinuous flows based on single models of workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models are a representation of the exergy flows between components within the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses, and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the first describes the operation of the components of each machine and where the exergy losses are, while the second estimates the exergy-aggregated value for the finished product and wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts to approximately 2 MJ. These inefficiencies are caused mainly by technical limitations of the machines, oversizing of metal feedstock that demands more mechanical transformation work, and low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From the established information, in this case, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
Procedia PDF Downloads 171
5333 Statistical Tools for SFRA Diagnosis in Power Transformers
Authors: Rahul Srivastava, Priti Pundir, Y. R. Sood, Rajnish Shrivastava
Abstract:
For the interpretation of sweep frequency response analysis (SFRA) signatures of transformers, different statistical techniques serve as effective tools for performing either phase-to-phase comparison or sister-unit comparison. In this paper, alongside the discussion of SFRA, several statistical techniques such as the cross correlation coefficient (CCF), root square error (RSQ), comparative standard deviation (CSD), absolute difference, mean square error (MSE), and min-max ratio (MM) are presented through several case studies. These methods require the sample data size and spot frequencies of the SFRA signatures being compared. The techniques used are based on power signal processing tools that can simplify the results, and limits can be created for the severity of the fault occurring in the transformer due to short-circuit forces or ageing. The advantages of using statistical techniques for analyzing SFRA results are indicated through several case studies, and the results obtained determine the state of the transformer.
Keywords: absolute difference (DABS), cross correlation coefficient (CCF), mean square error (MSE), min-max ratio (MM-ratio), root square error (RSQ), standard deviation (CSD), sweep frequency response analysis (SFRA)
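A minimal sketch of the trace-comparison statistics named above (CCF, MSE, absolute difference, and one common formulation of the min-max ratio) applied to two SFRA signatures; the magnitude responses are synthetic placeholders:

```python
import numpy as np

freq = np.logspace(1, 6, 500)                          # 10 Hz to 1 MHz sweep
ref = 60 + 20*np.sin(np.log10(freq))                   # reference signature (placeholder)
test = ref + np.random.default_rng(3).normal(0, 0.5, ref.size)   # measured signature

ccf = np.corrcoef(ref, test)[0, 1]                     # cross correlation coefficient
mse = np.mean((ref - test)**2)                         # mean square error
dabs = np.mean(np.abs(ref - test))                     # absolute difference
mm = np.sum(np.minimum(ref, test)) / np.sum(np.maximum(ref, test))  # min-max ratio
print(f"CCF={ccf:.4f}  MSE={mse:.4f}  DABS={dabs:.4f}  MM={mm:.4f}")
```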
Procedia PDF Downloads 697
5332 Hazard Alert in Malaysia Related to Occupational Safety and Health
Authors: Atikah Binti Azudin, Nurin Nazlah Binti Muhamad Yani, Nur Alya Nadhirah Binti Naaidith, Nur Amylia Wahida Binti Mat Ayob, Nurshamimi Shakirah Binti Suboh, Nur Auni Batrisyia Binti Md. Zaini, Nur Aziemah Binti Mohamad, Nurul Suffiyah Binti Sa’Dun, Sabrina Sasha Izzati Binti Zubaile, Umi Huwaina Binti Ahmiruddin, Wan Nur Shafawati Binti Wan Ghazali
Abstract:
A hazard alert is intended to provide brief information about significant incidents or existing difficulties in Department workplaces. The alert gives guidelines for proper processes, practices, and controls to be applied. When operated in accordance with the manufacturer's instructions, any machine or tool utilized at work provides a safe and dependable platform for workers to accomplish job duties. However, when not utilized appropriately, the machine might pose a major hazard to employees. Employers have a duty to keep employees safe in this scenario. This hazard alert outlines specific occupational dangers and the controls that employers must apply to prevent injuries or fatal accidents. There have been several hazard alert cases in Malaysia, which have had a negative impact on a number of workers. Looking on the bright side, every such incident can be addressed in a variety of ways. One of these is ensuring that only qualified individuals operate mobile machinery and equipment. In addition, employees may also perform frequent pre-use inspections of machinery to discover and fix flaws. Hazard alerts are very important, and this study covers a variety of subjects, including the methods employed.
Keywords: safe, hazard, impacts, duties
Procedia PDF Downloads 92
5331 A Review on Existing Challenges of Data Mining and Future Research Perspectives
Authors: Hema Bhardwaj, D. Srinivasa Rao
Abstract:
Technology for analysing, processing, and extracting meaningful data from enormous and complicated datasets can be termed "big data." The techniques of big data mining and big data analysis are extremely helpful for business activities such as making decisions, building organisational plans, researching the market efficiently, and improving sales, because typical management tools cannot handle such complicated datasets. Special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations, are brought on by big data. These unique problems call for new computational and statistical paradigms. This research paper offers an overview of the literature on big data mining and its process, along with its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making. Every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. This study presents the idea of data mining and analysis and the knowledge discovery techniques that have recently been developed, together with practical application systems. The article's conclusion also includes a list of issues and difficulties for further research in the area. The report discusses management's main big data and data mining challenges.
Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges
Procedia PDF Downloads 110
5330 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research
Authors: V. Bezverbny
Abstract:
The relevance of the project stems from the steady growth of deviant behavior statistics among Moscow citizens. In recent years, the socioeconomic well-being, wealth, and life expectancy of Moscow residents have been steadily improving, but crime and drug addiction have grown seriously. Another serious problem in Moscow has been the economic stratification of the population: the cost of comparable residential areas differs by a factor of 2.5. The project is aimed at comprehensive research and the development of a methodology for evaluating the main factors and reasons behind the growth of deviant behavior in Moscow. The main project objective is to identify the links between urban environment quality and the dynamics of citizens' deviant behavior at the regional and municipal levels, using statistical research methods and GIS modeling. The conducted research made it possible: 1) to evaluate the dynamics of deviant behavior in Moscow's different administrative districts; 2) to describe the reasons for the increase in crime, drug addiction, alcoholism, and suicide tendencies among the city population; 3) to develop a classification of city districts based on the crime rate; 4) to create a statistical database containing the main indicators of deviant behavior of the Moscow population in 2010-2015, including information regarding crime level, alcoholism, drug addiction, and suicides; 5) to present statistical indicators that characterize the dynamics of deviant behavior of the Moscow population under conditions of expanding city territory; 6) to analyze the main sociological theories and factors of deviant behavior in order to specify the types of deviation; 7) to consider the main theoretical statements of urban sociology devoted to the reasons for deviant behavior in megalopolis conditions. To explore the differentiation of deviant behavior factors, a questionnaire was developed, and a sociological survey involving more than 1000 people from different districts of the city was conducted. The sociological survey made it possible to study the socio-economic and psychological factors of deviant behavior. It also included Moscow residents' open-ended answers regarding the most pressing problems in their districts and their reasons for wishing to leave their place of residence. The results of the sociological survey lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, a large number of illegal migrants and homeless people, the nearness of large transport hubs and stations, ineffective police work, alcohol availability and drug accessibility, a low level of psychological comfort for Moscow citizens, and a large number of construction projects.
Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification
Procedia PDF Downloads 193
5329 Multi-Elemental Analysis Using Inductively Coupled Plasma Mass Spectrometry for the Geographical Origin Discrimination of Greek Giant Beans “Gigantes Elefantes”
Authors: Eleni C. Mazarakioti, Anastasios Zotos, Anna-Akrivi Thomatou, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas
Abstract:
“Gigantes Elefantes” is a particularly dynamic crop of giant beans cultivated in western Macedonia (Greece). This variety of large beans, grown in this area and specifically in the regions of Prespes and Kastoria, is a protected designation of origin (PDO) product with high nutritional quality. Mislabeling of geographical origin and blending with unidentified samples are common fraudulent practices in the Greek food market, with financial and possible health consequences. In the last decades, multi-elemental composition analysis has been used to identify the geographical origin of foods and agricultural products. In an attempt to verify the authenticity of Greek beans, multi-elemental analysis (Ag, Al, As, B, Ba, Be, Ca, Cd, Co, Cr, Cs, Cu, Fe, Ga, Ge, K, Li, Mg, Mn, Mo, Na, Nb, Ni, P, Pb, Rb, Re, Se, Sr, Ta, Ti, Tl, U, V, W, Zn, Zr) was performed by inductively coupled plasma mass spectrometry (ICP-MS) on 320 samples of beans originating from Greece (Prespes and Kastoria), China, and Poland. All samples were collected during the autumn of 2021. The obtained data were analysed by principal component analysis (PCA), an unsupervised statistical method that allows the dimensionality of large datasets to be reduced. Statistical analysis revealed a clear separation of beans that had been cultivated in Greece compared with those from China and Poland. An adequate discrimination of geographical origin between bean samples originating from the two Greek regions, Prespes and Kastoria, was also evident. Our results suggest that multi-elemental analysis combined with the appropriate multivariate statistical method could be a useful tool for the geographical authentication of beans. Acknowledgment: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call “YPOERGO 3”, code 2018SE01300000, project title: ‘Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products’.
Keywords: geographical origin, authenticity, multi-elemental analysis, beans, ICP-MS, PCA
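A minimal sketch of the PCA step described above, applied to a table of element concentrations per bean sample; the concentrations, element subset, and group sizes are placeholders, not the measured ICP-MS values:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
elements = ["Ca", "Cu", "Fe", "K", "Mg", "Mn", "Rb", "Sr", "Zn"]
X = pd.DataFrame(rng.lognormal(2.0, 0.4, size=(90, len(elements))), columns=elements)
origin = np.repeat(["Prespes", "Kastoria", "Imported"], 30)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
summary = pd.DataFrame(scores, columns=["PC1", "PC2"]).assign(origin=origin)
print(summary.groupby("origin").mean())   # coarse view of group separation in PC space
```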
Procedia PDF Downloads 78
5328 Application of Stochastic Models to Annual Extreme Streamflow Data
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. The Auto-Regressive Integrated Moving Average (ARIMA) model was used to simulate these series and forecast them into the future. For the analysis, annual extreme streamflow data of the Jelogir Majin station (above the Karkheh dam reservoir) for the years 1958–2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of annual extreme streamflow was removed using first-order differencing (d=1) for the development of the ARIMA model. Interestingly, the ARIMA(4,1,1) model developed was found to be the most suitable for simulating annual extreme streamflow for the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and assisting decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
Keywords: stochastic models, ARIMA, extreme streamflow, Karkheh river
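A minimal sketch of fitting an ARIMA(4,1,1) model to an annual peak-flow series and forecasting ten years ahead, as described above; the series is synthetic, not the Jelogir Majin record:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
years = np.arange(1958, 2006)
peak_flow = pd.Series(800 + np.cumsum(rng.normal(0, 60, years.size)), index=years)

model = ARIMA(peak_flow, order=(4, 1, 1)).fit()   # d=1 handles the non-stationarity
print(model.forecast(steps=10))                   # ten-year forecast of annual extremes
```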
Procedia PDF Downloads 148
5327 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and business logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure since linuxkit installs the minimum set of dependencies to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
Procedia PDF Downloads 179
5326 Development of Trigger Tool to Identify Adverse Drug Events From Warfarin Administered to Patient Admitted in Medical Wards of Chumphae Hospital
Authors: Puntarikorn Rungrattanakasin
Abstract:
Objectives: To develop a trigger tool to warn about the risk of bleeding as an adverse event of warfarin use during admission to the Medical Wards of Chumphae Hospital. Methods: A retrospective study was performed by reviewing the medical records of patients admitted between June 1st, 2020 and May 31st, 2021. ADEs were evaluated by Naranjo's algorithm. The international normalized ratio (INR) and bleeding events during admissions were collected. Statistical analyses, including the Chi-square test and the Receiver Operating Characteristic (ROC) curve for the optimal INR threshold, were used for the study. Results: Among the 139 admissions, the INR was found to vary between 0.86 and 14.91; there was a total of 15 bleeding events, of which 9 were mild and 6 were severe. Bleeding occurred whenever the INR was greater than 2.5 and reached statistical significance (p < 0.05), which was in concordance with the ROC curve and yielded 100% sensitivity and 60% specificity in the detection of a bleeding event. In this regard, an INR greater than 2.5 was considered the optimal threshold for a prompt alert of bleeding tendency. Conclusions: An INR value greater than 2.5 would be an appropriate trigger to warn of the risk of bleeding for patients taking warfarin in Chumphae Hospital.
Keywords: trigger tool, warfarin, risk of bleeding, medical wards
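A small sketch of the ROC-based threshold search described above, locating the INR cut-off that best separates bleeding from non-bleeding admissions; the INR values and outcomes are hypothetical, not the Chumphae Hospital records:

```python
import numpy as np
from sklearn.metrics import roc_curve

inr   = np.array([1.2, 1.8, 2.1, 2.4, 2.6, 2.8, 3.0, 3.5, 4.2, 5.1, 1.5, 2.2])
bleed = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1,   0,   0])

fpr, tpr, thresholds = roc_curve(bleed, inr)
youden = tpr - fpr                          # Youden's J statistic for each threshold
best = thresholds[np.argmax(youden)]
print(f"Sensitivity/specificity-optimal INR threshold: {best:.1f}")
```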
Procedia PDF Downloads 148
5325 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues. This approach aims to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities. This model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities. It also enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities. The research utilizes diverse electrical utility datasets to develop and validate the predictive model. RFM analysis, K-means clustering, and LSTM networks are applied to these datasets to analyze and predict electrical failures and voltage reliability. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities. It also investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities. It significantly improves prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
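A minimal sketch of the RFM plus K-means portion of the pipeline described above, applied to hypothetical component failure logs; the LSTM stage is omitted for brevity, and all field names and figures are placeholders:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

failures = pd.DataFrame({
    "component": ["T1", "T1", "T2", "T3", "T3", "T3", "T4"],
    "days_since_failure": [12, 40, 200, 5, 30, 90, 365],
    "repair_cost": [1500, 900, 300, 4000, 2500, 1800, 150],
})
rfm = failures.groupby("component").agg(
    recency=("days_since_failure", "min"),   # most recent failure
    frequency=("component", "size"),         # number of failures
    monetary=("repair_cost", "sum"),         # total repair cost
)
rfm["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfm))
print(rfm)
```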
Procedia PDF Downloads 56
5324 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk
Authors: Yilin Liao, Hewen Li, Paula McConvey
Abstract:
Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research paper explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data is collected through virtual tests and physical experiments designed to simulate skater-mat impacts. It is then analyzed to identify patterns and correlations; finally, it is used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool by employing machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
Keywords: artificial neural networks, concussion, machine learning, impact, speed skater
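A rough sketch of training a small neural network on impact features to flag high concussion risk, in the spirit of the study above; the features, data-generating rule, and network size are entirely hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n = 400
X = np.column_stack([
    rng.normal(10, 3, n),       # impact speed (m/s)
    rng.normal(70, 12, n),      # skater mass (kg)
    rng.uniform(0.1, 0.5, n),   # crash-mat stiffness proxy
    rng.integers(0, 3, n),      # impact position code (body side/head/feet)
])
risk = (0.3*X[:, 0] + 0.02*X[:, 1] - 8*X[:, 2] + rng.normal(0, 1, n) > 2.5).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
clf.fit(X, risk)
print("Predicted high-risk fraction:", clf.predict(X).mean())
```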
Procedia PDF Downloads 109
5323 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model
Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms outperformed that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show an improvement in performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma
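A tiny sketch of the ensemble idea above, averaging the malignancy probabilities of two classifiers (soft voting); the probabilities are made-up stand-ins for the outputs of trained image models:

```python
import numpy as np

# Hypothetical P(malignant) from two classifiers on five dermoscopic images
p_model_a = np.array([0.91, 0.12, 0.40, 0.75, 0.05])
p_model_b = np.array([0.88, 0.20, 0.55, 0.60, 0.10])

p_ensemble = (p_model_a + p_model_b) / 2                 # simple soft-voting average
labels = np.where(p_ensemble >= 0.5, "malignant", "benign")
print(list(zip(p_ensemble.round(2), labels)))
```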
Procedia PDF Downloads 81
5322 Noise Measurement and Awareness at Construction Site: A Case Study
Authors: Feiruz Ab'lah, Zarini Ismail, Mohamad Zaki Hassan, Siti Nadia Mohd Bakhori, Mohamad Azlan Suhot, Mohd Yusof Md. Daud, Shamsul Sarip
Abstract:
The construction industry is one of the major sectors in Malaysia. Apart from providing facilities, services, and goods, it also offers employment opportunities to local and foreign workers. In fact, construction workers are exposed to hazardous levels of noise generated from various sources, including excavators, bulldozers, concrete mixers, and piling machines. Previous studies indicated that piling and concrete work were recorded as the main sources contributing the highest levels of noise. Therefore, the aim of this study is to obtain the noise exposure during the piling process and to determine the awareness of workers regarding noise pollution at the construction site. Initially, noise readings were obtained at the construction site using a digital sound level meter (SLM), and the noise exposure of the workers was mapped. Readings were taken at four different distances: 5, 10, 15, and 20 meters from the piling machine. Furthermore, a questionnaire was also distributed to assess knowledge regarding noise pollution at the construction site. The results showed that the mean noise level at a 5 m distance was more than 90 dB, which exceeded the recommended level. Although the level of awareness regarding the effects of noise pollution is satisfactory, the majority of workers (90%) still did not wear ear protection devices during the work period. Therefore, safety module guidelines related to noise pollution controls should be implemented to provide a safe working environment and prevent early occupational hearing loss.
Keywords: construction, noise awareness, noise pollution, piling machine
Procedia PDF Downloads 385
5321 Improved Classification Procedure for Imbalanced and Overlapped Situations
Authors: Hankyu Lee, Seoung Bum Kim
Abstract:
The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., the major class) heavily exceeds the number of observations of the other class (i.e., the minor class). An overlapped dataset is the case where many observations are shared between the two classes. Imbalanced and overlapped data can frequently be found in many real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is a challenging issue because this situation degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts (non-overlapping, lightly overlapping, and severely overlapping) and applying the classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of the modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
Keywords: classification, imbalanced data with class overlap, split data space, support vector machine
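A minimal sketch of the Hausdorff-distance ingredient mentioned above, measuring how far apart (or how overlapped) two class regions are; the points are synthetic:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(2)
major = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # majority-class points
minor = rng.normal(loc=1.5, scale=1.0, size=(50, 2))     # minority-class points

d = max(directed_hausdorff(major, minor)[0],
        directed_hausdorff(minor, major)[0])             # symmetric Hausdorff distance
print(f"Hausdorff distance between the two classes: {d:.2f}")
```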
Procedia PDF Downloads 308