Search results for: foundation models
6641 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic
Abstract:
The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with the artificial neural networks (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as the retention factor value log k. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to reveal similarities and dissimilarities between the models. In this study, SRD was performed with average log k values as reference values. An excellent correlation between the experimentally observed log k values and the values predicted by the ANN was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action (TD1305), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences
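A minimal sketch of the SRD idea behind the ranking step: rank each model's predictions, rank the reference values (here, observed log k), and sum the absolute rank differences; the compound values and candidate models below are hypothetical illustration data, not the paper's.

```python
# Minimal sketch of the sum of ranking differences (SRD) procedure used to
# rank competing models against a reference ordering. Illustration data only.

def ranks(values):
    """Return 1-based ranks of values (ties broken by order of appearance)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def srd(reference, predictions):
    """Sum of absolute rank differences between a model and the reference."""
    ref_ranks = ranks(reference)
    pred_ranks = ranks(predictions)
    return sum(abs(a - b) for a, b in zip(ref_ranks, pred_ranks))

# Observed log k for five compounds and two hypothetical candidate models.
observed = [1.10, 0.85, 1.42, 0.67, 1.05]
model_a  = [1.08, 0.90, 1.40, 0.70, 1.01]   # preserves the observed ordering
model_b  = [1.30, 0.80, 1.00, 0.95, 1.20]   # scrambles the observed ordering

print(srd(observed, model_a))  # smaller SRD = more consistent model
print(srd(observed, model_b))
```

A smaller SRD value means the model's ranking of the compounds is closer to the reference ranking, which is how the most consistent QSRR model is singled out.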
Procedia PDF Downloads 319
6640 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Network, Random Forest, and Support Vector Machine. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore they control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural network, random forest, and support vector machine are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
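The PSDM-to-fragility step described above can be sketched as follows. This assumes the common power-law form D = a·IM^b with lognormal dispersion β, which is a standard simplification and not necessarily the exact form used in this study; all coefficient values are illustrative.

```python
import math

# Hedged sketch: turning a probabilistic seismic demand model (PSDM) into a
# fragility curve, assuming median demand = a * IM^b with lognormal
# dispersion beta. All numbers below are illustrative, not study results.

def fragility(im, a, b, beta, capacity):
    """P(demand >= capacity | IM) under a lognormal PSDM."""
    median_demand = a * im ** b
    z = (math.log(median_demand) - math.log(capacity)) / beta
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability of exceeding a hypothetical 2% drift limit state at PGA = 0.4 g.
p = fragility(im=0.4, a=0.05, b=1.2, beta=0.4, capacity=0.02)
print(p)
```

As expected for a fragility curve, the exceedance probability increases monotonically with the intensity measure.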
Procedia PDF Downloads 103
6639 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that the method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
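The Fisher scoring iteration mentioned above has the general update θ ← θ + I(θ)⁻¹U(θ), where U is the score and I the Fisher information. A minimal sketch on a one-parameter exponential model; the paper's composite-distribution likelihood is more involved, but the update rule has the same shape:

```python
# Minimal sketch of Fisher scoring for maximum likelihood estimation, shown
# on a one-parameter exponential model (rate lambda). For the exponential,
# the score is U(lam) = n/lam - sum(x) and the information is I(lam) = n/lam^2,
# so the MLE is 1/mean. Data values are illustrative.

def fisher_scoring_exponential(data, lam=1.0, tol=1e-12, max_iter=100):
    n = len(data)
    total = sum(data)
    for _ in range(max_iter):
        score = n / lam - total          # U(lambda)
        info = n / lam ** 2              # Fisher information I(lambda)
        step = score / info              # I(lambda)^-1 * U(lambda)
        lam += step
        if abs(step) < tol:
            break
    return lam

data = [1.2, 3.1, 0.7, 2.4, 2.6]          # sample mean = 2.0
print(fisher_scoring_exponential(data, lam=0.4))  # MLE is 1/mean = 0.5
```

In the regression setting of the paper, θ is a vector, U a gradient, and I a matrix, but each iteration is the same score-over-information step.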
Procedia PDF Downloads 31
6638 Equilibrium and Kinetic Studies of Lead Adsorption on Activated Carbon Derived from Mangrove Propagule Waste by Phosphoric Acid Activation
Authors: Widi Astuti, Rizki Agus Hermawan, Hariono Mukti, Nurul Retno Sugiyono
Abstract:
The removal of lead ions (Pb2+) from aqueous solution by activated carbon prepared by phosphoric acid activation of mangrove propagule waste was investigated in a batch adsorption system. Batch studies were carried out to address various experimental parameters, including pH and contact time. The Langmuir and Freundlich models were used to describe the adsorption equilibrium, while the pseudo-first-order and pseudo-second-order models were used to describe the kinetics of Pb2+ adsorption. The results show that the adsorption data are best described by the Langmuir isotherm model and the pseudo-second-order kinetic model.
Keywords: activated carbon, adsorption, equilibrium, kinetic, lead, mangrove propagule
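The two best-fitting models can be written down directly; a hedged sketch with illustrative (not fitted) parameter values:

```python
# Hedged sketch of the two models the study fits. qmax, KL, qe, and k2 below
# are illustrative placeholders, not the paper's fitted values.

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: equilibrium uptake qe for equilibrium conc. Ce."""
    return qmax * kl * ce / (1.0 + kl * ce)

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics: uptake qt at contact time t."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# Uptake approaches qmax at high concentration and qe at long contact times.
print(langmuir(ce=100.0, qmax=50.0, kl=0.5))
print(pseudo_second_order(t=1e6, qe=40.0, k2=0.01))
```

Both forms are saturating curves, which is why data that plateau at a monolayer capacity are typically fitted this way.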
Procedia PDF Downloads 165
6637 Housing Delivery in Nigeria: Repackaging for Sustainable Development
Authors: Funmilayo L. Amao, Amos O. Amao
Abstract:
It has been observed that the majority of people live in poor-quality housing or are totally homeless in urban centers, despite all governmental policies to provide housing to the public. On the supply side, various government policies have been formulated in the past to overcome the huge shortage through several Housing Reform Programmes. Despite these efforts, housing continues to be a mirage to the ordinary Nigerian. Currently, the various mass housing delivery programmes, such as the affordable housing scheme that utilizes Public Private Partnership efforts and several Private Finance Initiative models, could only provide about 3% of the required stock. This suggests the need for a holistic approach to the problem. The aim of this research is to identify the problems hindering the delivery of housing in Nigeria and their effects on housing affordability. The specific objectives are to identify the causes of housing delivery problems, to examine the different housing policies over the years, and to suggest a way forward for sustainable housing delivery. This paper also reviews the past and current housing delivery programmes in Nigeria, analyses the demand- and supply-side issues, and identifies the various housing delivery mechanisms in current practice. The objective of this paper, therefore, is to give insight into the delivery options for sustainable housing in Nigeria, given the existing delivery structures and the framework specified in the New National Housing Policy. The secondary data were obtained from books, journals, and seminar papers. The conclusion is that models cannot simply be copied from other nations; workable models based on Nigeria's socio-cultural background should instead be evolved to address the huge housing shortage. Recommendations are made in this regard.
Keywords: housing, sustainability, housing delivery, housing policy, housing affordability
Procedia PDF Downloads 294
6636 Implementation of Lean Production in Business Enterprises: A Literature-Based Content Analysis of Implementation Procedures
Authors: P. Pötters, A. Marquet, B. Leyendecker
Abstract:
The objective of this paper is to investigate different implementation approaches for the implementation of Lean production in companies. Furthermore, a structured overview of those different approaches is to be made. Therefore, the present work is intended to answer the following research question: What differences and similarities exist between the various systematic approaches and phase models for the implementation of Lean Production? To present various approaches for the implementation of Lean Production discussed in the literature, a qualitative content analysis was conducted. Within the framework of a qualitative survey, a selection of texts dealing with lean production and its introduction was examined. The analysis presents different implementation approaches from the literature, covering the descriptive aspect of the study. The study also provides insights into similarities and differences among the implementation approaches, which are drawn from the analysis of latent text contents and author interpretations. In this study, the focus is on identifying differences and similarities among systemic approaches for implementing Lean Production. The research question takes into account the main object of consideration, objectives pursued, starting point, procedure, and endpoint of the implementation approach. The study defines the concept of Lean Production and presents various approaches described in literature that companies can use to implement Lean Production successfully. The study distinguishes between five systemic implementation approaches and seven phase models to help companies choose the most suitable approach for their implementation project. The findings of this study can contribute to enhancing transparency regarding the existing approaches for implementing Lean Production. 
This can enable companies to compare and contrast the available implementation approaches and choose the most suitable one for their specific project.
Keywords: implementation, lean production, phase models, systematic approaches
Procedia PDF Downloads 103
6635 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges
Authors: Dianelys Vega, Carlos Magluta, Ney Roitman
Abstract:
The simulation of loads induced by walking people in civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important considerations in the design of slender structures is the Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on the structures' dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. Several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. Therefore, this work contributes to a better understanding of this phenomenon by bringing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject simultaneously measured the acceleration of the center of mass. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated. In addition, experimental data of a pedestrian walking on a flexible structure were used to validate the interaction model, through comparison of the measured and simulated structural responses at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less efficient for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduces the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction
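As a rough counterpart to the full bipedal model, the perfectly periodic loading mentioned above is commonly written as a Fourier series of harmonics of the pacing rate acting on a single structural mode. A hedged sketch with illustrative pedestrian weight, dynamic load factors, and modal properties, not the prototype footbridge's values:

```python
import math

# Hedged sketch: a periodic walking force (Fourier series of the pacing
# rate) driving one footbridge mode, integrated with semi-implicit Euler.
# Pedestrian weight, dynamic load factors (alphas), and modal mass/damping
# below are illustrative assumptions.

def walking_force(t, weight=700.0, pacing_hz=2.0, alphas=(0.4, 0.1, 0.06)):
    f = weight
    for i, a in enumerate(alphas, start=1):
        f += weight * a * math.sin(2.0 * math.pi * i * pacing_hz * t)
    return f

def sdof_response(m, c, k, duration=10.0, dt=1e-3):
    """Peak displacement of m*x'' + c*x' + k*x = F(t), semi-implicit Euler."""
    x, v, peak = 0.0, 0.0, 0.0
    for n in range(int(duration / dt)):
        t = n * dt
        a = (walking_force(t) - c * v - k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Mode with f_n = 2 Hz (resonant with the pacing rate) and 1% damping.
m = 20000.0
k = m * (2.0 * math.pi * 2.0) ** 2
c = 2.0 * 0.01 * math.sqrt(k * m)
print(sdof_response(m, c, k))  # peak mid-span displacement in metres
```

Because the harmonic amplitudes are fixed, this representation is exactly periodic, which is the simplification the abstract identifies as the likely cause of the spectral differences it observed.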
Procedia PDF Downloads 129
6634 Research on Residential Block Fabric: A Case Study of Hangzhou West Area
Abstract:
Residential block construction in China's big cities began in the 1950s, and four models have had a far-reaching influence on the modern residential block over the course of its development: the unit compound and the residential district from the 1950s to the 1980s, and the gated community and the open community from the 1990s to the present. Based on an analysis of the fabric of these four models, the article takes residential blocks in the Hangzhou west area as an example and carries out studies at the urban structure level and the block spatial level, mainly covering the urban road network, land use, community function, road organization, public space, and building fabric. Finally, the article puts forward a semi-open sub-community strategy to improve the current fabric.
Keywords: Hangzhou west area, residential block model, residential block fabric, semi-open sub-community strategy
Procedia PDF Downloads 416
6633 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study researches the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis, using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
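One minimal way to sketch the text side of such a pipeline is a bag-of-words vector extracted from the report and concatenated with image-derived features before classification. The vocabulary, report text, and image features below are hypothetical, and the study's actual NLP/LLM feature extraction is considerably richer:

```python
# Hedged sketch: turning a free-text radiology report into a simple
# bag-of-words feature vector and merging it with image-derived features
# before a classifier (e.g., Random Forest) is trained. All names and
# values below are illustrative assumptions.

VOCAB = ["opacity", "effusion", "cardiomegaly", "normal", "consolidation"]

def report_features(report):
    """Count occurrences of each vocabulary term in the report text."""
    tokens = report.lower().replace(".", " ").replace(",", " ").split()
    return [tokens.count(term) for term in VOCAB]

report = "Mild cardiomegaly. No effusion. Lungs otherwise normal."
text_vec = report_features(report)
image_vec = [0.12, 0.87, 0.05]        # stand-in for image-derived features
combined = text_vec + image_vec       # merged feature vector for the classifier
print(text_vec)
```

In practice an LLM or TF-IDF pipeline would replace the raw counts, but the merge step, concatenating text and image feature vectors per case, looks the same.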
Procedia PDF Downloads 42
6632 Debriefing Practices and Models: An Integrative Review
Authors: Judson P. LaGrone
Abstract:
Simulation-based education in curricula was once a luxurious component of nursing programs but now serves as a vital element of an individual’s learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed to allow the instructor(s) or trained professional(s) to act as a debriefer to guide a reflection with a purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience to allow the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systemic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was useful to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature with similarities of effectiveness for participants in clinical simulation-based pedagogy. 
Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place as an important consideration, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, video-assisted debriefing, and rapid cycle deliberate practice to reflective debriefing. A recurring finding centered on the emphasis on continued research for systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models among nursing curricula, along with an increasing rate of ill-prepared faculty to facilitate the debriefing phase of the simulation.
Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education
Procedia PDF Downloads 141
6631 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, in both cases from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in the processes of the artistic foundry with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt the PLA, obtaining an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce the cost and time compared with hand modeling of the wax. In addition, parts can be manufactured with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit. The problem is that when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining more detailed and smaller pieces than the FDM. Such small models are quite difficult and complex to melt using the lost wax technique of Ceramic Shell casting. As an alternative, there are microcasting and electroforming, which are specialized in creating small metal pieces such as jewelry. Microcasting is a variant of the lost wax technique that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt the model and bake them. Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to properly distribute the metal. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and resin, and first tests are being done to validate the use of electroforming on micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
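A first-order way to plan the electroforming bath is Faraday's law of electrolysis, which relates the charge passed to the deposited metal thickness. A sketch for copper with illustrative current, area, and current-efficiency values, not the study's process parameters:

```python
# Hedged sketch: estimating electroformed copper thickness from Faraday's
# law. Current, plating time, area, and efficiency are illustrative inputs;
# the material constants are standard values for copper.

FARADAY = 96485.0       # C/mol
M_CU = 63.55            # g/mol, molar mass of copper
RHO_CU = 8.96           # g/cm^3, density of copper
N_ELECTRONS = 2         # Cu2+ + 2e- -> Cu

def deposit_thickness_um(current_a, hours, area_cm2, efficiency=0.95):
    charge = current_a * hours * 3600.0                  # coulombs passed
    mass_g = efficiency * charge * M_CU / (N_ELECTRONS * FARADAY)
    thickness_cm = mass_g / (RHO_CU * area_cm2)
    return thickness_cm * 1e4                            # micrometres

# 0.5 A over a 20 cm^2 micro-sculpture for 2 hours:
print(round(deposit_thickness_um(0.5, 2.0, 20.0), 1))  # ~62.9 um
```

Thickness scales linearly with current and time, which is why electroforming a thin metal skin over a resin model is far cheaper than melting and casting the same mass of metal.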
Procedia PDF Downloads 133
6630 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study
Authors: Kasim Görenekli, Ali Gülbağ
Abstract:
This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize hyperparameters for all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions being among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply.
The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.
Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management
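The PSO step used for hyperparameter tuning can be sketched in a few lines. This minimal version minimizes a toy 2-D objective; in the study the objective would instead be a model's validation error as a function of its hyperparameters. The swarm size, inertia, and acceleration coefficients below are common illustrative defaults, not the study's settings:

```python
import random

# Minimal particle swarm optimization (PSO) sketch. Each particle tracks its
# personal best; the swarm shares a global best; velocities blend inertia,
# cognitive pull (personal best), and social pull (global best).

def pso(objective, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(g):
                    g = pos[i][:]
    return g

sphere = lambda p: sum(x * x for x in p)   # toy objective, minimum at origin
best = pso(sphere)
print(best)
```

Replacing `sphere` with a cross-validated error function over, say, a Random Forest's tree count and depth gives the hyperparameter-tuning use described in the abstract.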
Procedia PDF Downloads 14
6629 How to Empower People to Provide Good Nutrition to Children: Bengkel Gizi Terpadu (Integrated Nutrition Workshop)
Authors: Anggun Yuliana Putri, Melisa Rahmadini
Abstract:
The Ministry of National Development Planning in Indonesia has reported that more than eight million Indonesian children are still malnourished. Based on national statistics and a recent ranking from the NGO Save the Children, Indonesia is one of 15 countries making the fastest gains in cutting child malnutrition among 165 developing countries. According to a United Nations Children's Fund report, at least 7.6 million Indonesian children under the age of 5, or one out of every three, suffer from stunted growth, a primary manifestation of malnutrition in early childhood; the report ranked Indonesia as having the fifth largest number of children under 5 suffering from stunted growth worldwide. Addressing the problem of malnutrition in Indonesia, the Aksi Cepat Tanggap (ACT) Foundation, a humanitarian organization working with Carrefour, acts as a donor and pursues several solutions to the problem, especially for malnourished children and infants in the South Tangerang area, Indonesia. The objective of this study was to examine the community empowerment driven by the ACT Foundation in order to maintain the good nutritional status of children and toddlers after recovery from malnutrition. Research was conducted using a qualitative approach, through in-depth interviews and observation, to find out how the Bengkel Gizi Terpadu (Integrated Nutrition Workshop) programs make people empowered. Bengkel Gizi Terpadu (BGT) is divided into three sequences of activities: integrated malnutrition rehabilitation; provision of health education to mothers of infants and young children; and family economic empowerment for heads of household. Results showed that after the empowerment process had been carried out, through training and provision of knowledge to mothers and families about the importance of nutrition and health, 30 of 100 mothers participated actively. Then, 45 of 100 heads of household who participated in business training were able to open a business of their own, which was provided and supervised by ACT as the stakeholder in this program. The further findings revealed that the BGT programs are able to form community health workers and provide employment opportunities to the community. This study believes that the integrated nutrition workshop program is the solution to maintaining good nutrition among children in South Tangerang, through the empowerment of parents and community members via education and business training programs. Both programs equipped parents with economic sustenance and the necessary information, a pre-requisite to ending malnutrition in children.
Keywords: community, empowerment, malnutrition, training
Procedia PDF Downloads 327
6628 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems for fingerprint anti-spoofing is that models are not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and the various CNN models were trained again. The best performance of each CNN model trained with the dataset including generated fake images was recorded, each time along with the accuracy and the mean average error rate. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: anti-spoofing, CNN, fingerprint recognition, GAN
Procedia PDF Downloads 183
6627 Towards the Reverse Engineering of UML Sequence Diagrams Using Petri Nets
Authors: C. Baidada, M. H. Abidi, A. Jakimi, E. H. El Kinani
Abstract:
Reverse engineering has become a viable method to analyze an existing system and reconstruct the necessary models from its original artifacts. The reverse engineering of behavioral models consists in extracting high-level models that help understand the behavior of existing software systems. In this paper, we propose an approach for the reverse engineering of sequence diagrams from the analysis of execution traces produced dynamically by an object-oriented application, using Petri nets. Our results show that this approach can produce sequence diagrams in reasonable time and suggest that these diagrams are helpful in understanding the behavior of the underlying application. Finally, we discuss the approaches and tools that are needed in the process of reverse engineering UML behavior. This work is a substantial step towards providing a high-quality methodology for the effective and efficient reverse engineering of sequence diagrams.
Keywords: reverse engineering, UML behavior, sequence diagram, execution traces, petri nets
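The core trace-to-Petri-net idea can be sketched as follows: each method call in the execution trace becomes a transition, with a place between consecutive calls so that the net's firing order reproduces the observed call order. The class and trace below are a hypothetical simplification of the proposed approach, which additionally maps the net back to sequence diagram elements:

```python
# Hedged sketch: building a trivial sequential Petri net from an execution
# trace. Transitions = method calls; places enforce the observed order.
# The trace contents are illustrative.

class PetriNet:
    def __init__(self):
        self.places = {}            # place name -> token count
        self.transitions = {}       # transition name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)
        for p in inputs + outputs:
            self.places.setdefault(p, 0)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.places[p] > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), name + " is not enabled"
        for p in inputs:
            self.places[p] -= 1
        for p in outputs:
            self.places[p] += 1

def net_from_trace(trace):
    net = PetriNet()
    for i, call in enumerate(trace):
        net.add_transition(call, ["p%d" % i], ["p%d" % (i + 1)])
    net.places["p0"] = 1            # initial marking: one token at the start
    return net

trace = ["Client.login", "Server.auth", "Server.reply"]
net = net_from_trace(trace)
for call in trace:                  # replaying the trace fires in order
    net.fire(call)
print(net.places["p3"])
```

Merging such per-trace nets (and folding repeated calls into loops) is where the real reconstruction work lies; the sequence diagram's lifelines and messages are then read off the transitions.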
Procedia PDF Downloads 444
6626 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are then evaluated using the external dataset for validation. The models' accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used. Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
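The evaluation metrics named above (accuracy, precision, recall, F1-score for the three-class classifier, and IoU for the segmentation masks) all follow from simple counts. A hedged sketch with an illustrative confusion matrix, not the paper's data:

```python
# Sketch of the reported evaluation metrics. cm[i][j] = number of
# class-i samples predicted as class j; the values are illustrative.

def per_class_scores(cm, k):
    """Precision, recall, F1 for class k of a square confusion matrix."""
    tp = cm[k][k]
    fp = sum(cm[i][k] for i in range(len(cm))) - tp   # others predicted as k
    fn = sum(cm[k]) - tp                              # k predicted as others
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def iou(mask_a, mask_b):
    """Intersection over union of two flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union

cm = [[95, 3, 2],    # COVID-19
      [4, 90, 6],    # normal
      [1, 5, 94]]    # pneumonia
accuracy = sum(cm[i][i] for i in range(3)) / sum(map(sum, cm))  # 0.93
p, r, f1 = per_class_scores(cm, 0)   # scores for the COVID-19 class
```

For the segmentation model, IoU is computed between the predicted lung mask and the ground-truth mask, which is why it is reported alongside pixel accuracy.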
Procedia PDF Downloads 72
6625 The Acquisition of /r/ By Setswana-Learning Children
Authors: Keneilwe Matlhaku
Abstract:
Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulations in the acquisition of rhotics. This article provides a detailed analysis of the early development of a rhotic phoneme, the apical trill /r/, by monolingual Setswana (Tswana S30) children aged between 1 and 4 years. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution with dental and uvular fricatives). The primary focus of the article is on the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females) whose speech was recorded in their homes over a period of 4 months with no or only minimal disruptions to their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children's data. Phon also enabled the generation of the children's phonological inventories for comparison with adult target IPA forms. We explain the children's patterns through current models of phonological emergence (MacWhinney 2015; McAllister Byun, Inkelas & Rose 2016; Rose et al. 2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We highlight how the substitution patterns observed in the data can be captured through a consideration of the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in the production of these sounds. These considerations, in turn, highlight some of the most central aspects of the challenges faced by the child toward learning these auditory-articulatory mappings.
We provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages, in which we show that the variability and volatility in the substitution patterns of /r/ are also brought about by the properties of the children's ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana-learning children, an emerging area of public service in Botswana. Keywords: rhotic, apical trill, Phon, phonological emergence, auditory, articulatory, mapping
Procedia PDF Downloads 376624 A Control Model for the Dismantling of Industrial Plants
Authors: Florian Mach, Eric Hund, Malte Stonis
Abstract:
The dismantling of disused industrial facilities such as nuclear power plants or refineries is an enormous challenge for the planning and control of the logistic processes. Existing control models do not meet the requirements for a proper dismantling of industrial plants. Therefore, the paper presents an approach for the control of dismantling and post-processing processes (e.g. decontamination) in plant decommissioning. In contrast to existing approaches, the dismantling sequence and depth are selected depending on the capacity utilization of required post-processing processes by also considering individual characteristics of respective dismantling tasks (e.g. decontamination success rate, uncertainties regarding the process times). The results can be used in the dismantling of industrial plants (e.g. nuclear power plants) to reduce dismantling time and costs by avoiding bottlenecks such as capacity constraints. Keywords: dismantling management, logistics planning and control models, nuclear power plant dismantling, reverse logistics
Procedia PDF Downloads 303
6623 Drying Characteristics of Shrimp by Using the Traditional Method of Oven
Authors: I. A. Simsek, S. N. Dogan, A. S. Kipcak, E. Morodor Derun, N. Tugrul
Abstract:
In this study, the drying characteristics of shrimp are studied using the traditional drying method of an oven. Drying temperatures are selected between 60-80°C. The obtained experimental drying results are fitted to eleven mathematical models: Alibas; Aghbashlo et al.; Henderson and Pabis; Jena and Das; Lewis; Logarithmic; Midilli and Kucuk; Page; Parabolic; Wang and Singh; and Weibull. The parabolic model was selected as the best, based on the highest coefficient of determination (R²) (0.999990 at 80°C), the lowest χ² (0.000002 at 80°C), and the lowest root mean square error (RMSE) (0.000976 at 80°C) compared with the other models. The effective moisture diffusivity (Deff) values were calculated using the cylindrical-coordinate approximation of Fick's second law and are found to lie between 6.61×10⁻⁸ and 6.66×10⁻⁷ m²/s. The activation energy (Ea) was calculated using a modified form of the Arrhenius equation and is found to be 18.315 kW/kg. Keywords: activation energy, drying, effective moisture diffusivity, modelling, oven, shrimp
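The Deff calculation from Fick's second law can be sketched briefly. A common route, assumed here since the abstract does not give the exact procedure, is the first-term solution for an infinite cylinder: ln(MR) is linear in time with slope −λ₁²·Deff/r², where λ₁ ≈ 2.4048 is the first root of the Bessel function J₀. The radius and drying curve below are synthetic, not the paper's data:

```python
import math

# Hedged sketch: estimating effective moisture diffusivity (Deff) from
# the slope of ln(MR) vs. time, first-term cylindrical solution of
# Fick's second law. lambda_1 = 2.4048 is the first root of J0.

LAMBDA1 = 2.4048

def slope(xs, ys):
    """Least-squares slope of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def estimate_deff(times_s, mr, radius_m):
    k = slope(times_s, [math.log(m) for m in mr])  # k = -lambda1^2*Deff/r^2
    return -k * radius_m ** 2 / LAMBDA1 ** 2

# Synthetic drying curve generated with Deff = 5e-8 m^2/s, r = 5 mm
deff_true, r = 5e-8, 5e-3
times = [0, 600, 1200, 1800, 2400]                 # seconds
mr = [math.exp(-LAMBDA1 ** 2 * deff_true * t / r ** 2) for t in times]
deff_est = estimate_deff(times, mr, r)             # recovers ~5e-8 m^2/s
```

Repeating this fit at each drying temperature yields the Deff values whose temperature dependence then feeds the Arrhenius-type activation energy estimate.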
Procedia PDF Downloads 186
6622 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon
Authors: Laura M. F. Bertens
Abstract:
There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. 
Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, while implicit biases and the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge. Keywords: canon, ontological modelling, Protégé Ontology Editor, social network modelling, Visone
Procedia PDF Downloads 126
6621 Energy Use and Econometric Models of Soybean Production in Mazandaran Province of Iran
Authors: Majid AghaAlikhani, Mostafa Hojati, Saeid Satari-Yuzbashkandi
Abstract:
This paper studies energy use patterns and the relationship between energy input and yield for soybean (Glycine max (L.) Merrill) in the Mazandaran province of Iran. In this study, data were collected by administering a questionnaire in face-to-face interviews. Results revealed that the highest share of energy consumption belongs to chemical fertilizers (29.29%), followed by diesel (23.42%) and electricity (22.80%). Our investigations showed that a total energy input of 23404.1 MJ.ha-1 was consumed for soybean production. The energy productivity, specific energy, and net energy values were estimated as 0.12 kg.MJ-1, 8.03 MJ.kg-1, and 49412.71 MJ.ha-1, respectively. The ratio of energy outputs to energy inputs was 3.11. The obtained results indicated that the shares of direct, indirect, renewable and non-renewable energies were 56.83%, 43.17%, 15.78% and 84.22%, respectively. Three econometric models were also developed to estimate the impact of energy inputs on yield. The results of the econometric models revealed that the impacts of chemical fertilizer and water on yield were significant at the 1% probability level. Also, direct and non-renewable energies were found to be rather high. Cost analysis revealed that the total cost of soybean production per ha was around 518.43$. Accordingly, the benefit-cost ratio was estimated as 2.58. The energy use efficiency in soybean production was found to be 3.11, which reveals that the inputs used in soybean production are used efficiently. However, due to the higher rate of nitrogen fertilizer consumption, sustainable agriculture should be extended, and extension staff could propose the substitution of chemical fertilizers with biological fertilizers or green manure. Keywords: Cobb-Douglas function, economic analysis, energy efficiency, energy use patterns, soybean
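The energy indices reported are simple ratios of the input energy, output energy, and crop yield. A sketch computing them from the abstract's figures; the soybean yield (~2914 kg/ha) is back-calculated from the specific energy and is therefore an assumption:

```python
# Sketch of the energy indices reported. energy_in and net_energy are
# taken from the abstract; the yield value is a back-calculated
# assumption (energy_in / specific energy of 8.03 MJ/kg).

def energy_indices(energy_in, energy_out, yield_kg):
    return {
        "energy_ratio": energy_out / energy_in,        # output/input
        "energy_productivity": yield_kg / energy_in,   # kg/MJ
        "specific_energy": energy_in / yield_kg,       # MJ/kg
        "net_energy": energy_out - energy_in,          # MJ/ha
    }

energy_in = 23404.1        # MJ/ha, total input (from the abstract)
net_energy = 49412.71      # MJ/ha (from the abstract)
idx = energy_indices(energy_in, energy_in + net_energy, 2914.0)
# idx["energy_ratio"] ~= 3.11, idx["specific_energy"] ~= 8.03 MJ/kg
```

The reported 3.11 energy ratio follows directly from the net energy and total input, which is a useful internal-consistency check on such energy audits.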
Procedia PDF Downloads 333
6620 Performance of Fiber-Reinforced Polymer as an Alternative Reinforcement
Authors: Salah E. El-Metwally, Marwan Abdo, Basem Abdel Wahed
Abstract:
Fiber-reinforced polymer (FRP) bars have been proposed as an alternative to conventional steel bars; hence, the use of these non-corrosive and nonmetallic reinforcing bars has increased in various concrete projects. This reinforcement material is lightweight, has a long lifespan, and needs minor maintenance; however, its non-ductile nature and weak bond with the surrounding concrete create a significant challenge. The behavior of concrete elements reinforced with FRP bars has been the subject of several experimental investigations, even with their high cost. This study aims to numerically assess the viability of using FRP bars as longitudinal reinforcement, in comparison with traditional steel bars, and also as prestressing tendons instead of traditional prestressing steel. Nonlinear finite element analysis has been utilized to carry out the current study. Numerical models have been developed to examine the behavior of concrete beams reinforced with FRP bars or tendons against similar models reinforced with either conventional steel or prestressing steel. These numerical models were verified against experimental test results available in the literature. The obtained results revealed that concrete beams reinforced with FRP bars, as passive reinforcement, exhibited less ductility and less stiffness than similar beams reinforced with steel bars. On the other hand, when FRP tendons are employed in prestressing concrete beams, the results show that the performance of these beams is similar to that of beams prestressed by conventional active reinforcement, but with a difference caused by the two tendon materials' moduli of elasticity. Keywords: reinforced concrete, prestressed concrete, nonlinear finite element analysis, fiber-reinforced polymer, ductility
Procedia PDF Downloads 11
6619 Simulink Library for Reference Current Generation in Active DC Traction Substations
Authors: Mihaela Popescu, Alexandru Bitoleanu
Abstract:
This paper is focused on the reference current calculation in the compensation mode of active DC traction substations. The so-called p-q theory of instantaneous reactive power is used as the theoretical foundation. The goal of total compensation is taken into consideration for operation under both sinusoidal and nonsinusoidal voltage conditions, through the two objectives of unity power factor and perfect harmonic cancellation. Four blocks of reference current generation implement the conceived algorithms, and they are included in a specific Simulink library, which is useful in a DSP dSPACE-based platform working under Matlab/Simulink. The simulation results validate the correctness of the implementation and the fulfillment of the compensation tasks. Keywords: active power filter, DC traction, p-q theory, Simulink library
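The core of the p-q theory is the computation of the instantaneous real power p and imaginary power q in the alpha-beta frame, and the derivation of a source reference current from them. A minimal single-point sketch, assuming the standard p-q definitions (the signal values are illustrative; the paper's four generation blocks implement more complete objectives):

```python
# Minimal sketch of the p-q theory calculation in the alpha-beta frame
# and a unity-power-factor source reference current. The numbers are
# illustrative, not taken from the paper.

def pq(v_a, v_b, i_a, i_b):
    p = v_a * i_a + v_b * i_b   # instantaneous real power
    q = v_b * i_a - v_a * i_b   # instantaneous imaginary power
    return p, q

def source_ref_current(v_a, v_b, p_avg):
    """Unity-power-factor objective: source current proportional to
    the voltage vector, carrying only the average real power."""
    norm2 = v_a ** 2 + v_b ** 2
    return p_avg * v_a / norm2, p_avg * v_b / norm2

p, q = pq(100.0, 0.0, 8.0, -6.0)                  # p = 800 W, q = 600 var
isa, isb = source_ref_current(100.0, 0.0, 800.0)  # (8.0, 0.0)
# the active filter compensates the difference between the load current
# and this source reference: (8.0 - 8.0, -6.0 - 0.0)
```

Under nonsinusoidal voltage, the unity-power-factor and perfect-harmonic-cancellation objectives diverge, which is why the library provides separate reference-generation blocks for each.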
Procedia PDF Downloads 670
6618 Annual Water Level Simulation Using Support Vector Machine
Authors: Maryam Khalilzadeh Poshtegal, Seyed Ahmad Mirbagheri, Mojtaba Noury
Abstract:
In this paper, using yearly input data of rainfall, temperature and inflow to Urmia Lake, water level fluctuations were simulated by means of three models. In the context of climate change, the fluctuation of lake water levels is of high interest. This study investigates data-driven models: support vector machines (SVM), a relatively new regression procedure in water resources, are applied to the yearly level data of Lake Urmia, the largest lake in Iran and a hypersaline one. The simulated lake levels are found to be in good correlation with the observed values, and the results of the SVM simulation show better accuracy. The mean square error, mean absolute relative error and determination coefficient statistics are used as comparison criteria. Keywords: simulation, water level fluctuation, Urmia Lake, support vector machine
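The three comparison criteria named above are straightforward to compute from paired observed and simulated series. A sketch with illustrative lake-level values (not the study's data):

```python
# Sketch of the comparison criteria: mean square error, mean absolute
# relative error, and the determination coefficient (R^2). The
# observed/simulated levels below are illustrative.

def metrics(obs, sim):
    n = len(obs)
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / n
    mare = sum(abs(o - s) / abs(o) for o, s in zip(obs, sim)) / n
    mean_o = sum(obs) / n
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    r2 = 1 - ss_res / ss_tot
    return mse, mare, r2

obs = [1274.1, 1274.8, 1275.3, 1274.5]   # yearly lake levels, m a.s.l.
sim = [1274.2, 1274.7, 1275.2, 1274.6]
mse, mare, r2 = metrics(obs, sim)
```

Note that with lake levels expressed in metres above sea level, the relative error is computed against large absolute values, so MARE is typically very small even for visibly different curves; R² is the more discriminating criterion here.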
Procedia PDF Downloads 365
6617 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity field is of great significance in geoscience, the national economy and national security, and gravitational gradient measurement has been extensively studied due to its higher accuracy than gravity measurement. The gravity gradient sensor, being one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. Therefore, this paper starts by analyzing the working principle of the gravity gradient sensor through Newton's law, and then considers the relative motion between inertial and non-inertial systems to build a relatively adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement. Keywords: gravity gradient, gravity gradient sensor, accelerometer, single-axis rotation modulation
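The Newtonian starting point of such a sensor model can be checked with a back-of-envelope calculation: for a spherically symmetric mass, g = GM/r², so the vertical gradient dg/dr = −2GM/r³, which at the Earth's surface gives the familiar free-air gradient of roughly 3086 Eötvös (0.3086 mGal/m). The constants below are standard values, not taken from the paper:

```python
# Back-of-envelope check of the vertical gravity gradient implied by
# Newton's law for a spherical Earth. 1 Eotvos (E) = 1e-9 s^-2.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
r = 6.371e6     # mean Earth radius, m

g = G * M / r ** 2                 # surface gravity, ~9.82 m/s^2
gradient = 2 * G * M / r ** 3      # magnitude of dg/dr, s^-2
gradient_eotvos = gradient / 1e-9  # ~3083 E, close to the 3086 E
                                   # free-air gradient
```

Local geological anomalies perturb this background by tens of Eötvös at most, which illustrates why gradiometer sensors, and hence their error models, must resolve signals many orders of magnitude below the background field.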
Procedia PDF Downloads 325
6616 NHS Tayside Plastic Surgery Induction Cheat Sheet and Video
Authors: Paul Holmes, Mike N. G.
Abstract:
Foundation-year doctors face increased stress, pressure and uncertainty when starting new rotations throughout their first years of work. This research questionnaire resulted in an induction cheat sheet and induction video that enhanced junior doctors' understanding of how to work effectively within the plastic surgery department at NHS Tayside. The objectives and goals were to improve the transition between cohorts of junior doctors in ward 26 at Ninewells Hospital. Before this quality improvement project, the induction pack was 74 pages long and over eight years old. With the support of consultant Mike Ng, a new up-to-date induction was created, involving the development of a questionnaire and a cheat sheet. The questionnaire covered clerking, venipuncture, ward pharmacy, theatres, admissions, specialties on the ward, the cardiac arrest trolley, clinical emergencies, discharges and escalation. This audit has three completed cycles between August 2022 and August 2023. The cheat sheet is a concise two-page A4 document designed so that doctors can easily reference and understand the essentials. The document format is a table containing ward layout; specialty; location; physician associate; shift patterns; ward rounds; handover location and time; hours coverage; senior escalation; nights; daytime duties; meetings/MDTs/board meetings; important bleeps and codes; department guidelines; boarders; referrals and patient stream; pharmacy; absences; rota coordinator; annual leave; and top tips. The induction video is a 10-minute in-depth explanation of all aspects of the ward, exploring the contents of the cheat sheet in more depth. This alternative visual format familiarizes the junior doctor with all aspects of the ward. These were provided to all foundation year 1 and 2 doctors on ward 26 at Ninewells Hospital at NHS Tayside, Scotland.
This work has since been adopted by the General Surgery Department, which extends to six further wards, and has improved the effective handover of the junior doctor's role between cohorts. There is potential to further expand the cheat sheet to other departments, as the concise document takes around 30 minutes to complete by a doctor who is currently on that ward. The time spent filling out the form provides vital information to the incoming junior doctors, which has significant potential to improve patient care. Keywords: induction, junior doctor, handover, plastic surgery
Procedia PDF Downloads 84
6615 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness
Authors: Melissa Angus Baboun
Abstract:
Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community was examined. The term deaf was entered into the search engine tool of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf in their content and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analyzing instances of metaphorical deafness (e.g. fall on deaf ears), (2) categorizing the content of the articles into models of disability discourse (the medical, socio-cultural, and supercrip models of disability narratives), and (3) analyzing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained instances of idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait. The most common idiomatic expression found was fall on deaf ears. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two were found to follow the medical model, and two were found to follow the supercrip model. The additional data found in these articles include two instances of the term deaf and mute, an overwhelming use of the lowercase d for the term deaf, and the misuse of the term translator (to mean interpreter). Keywords: deafness, disability, news coverage, Caribbean newspapers
Procedia PDF Downloads 232
6614 Postharvest Losses and Handling Improvement of Organic Pak-Choi and Choy Sum
Authors: Pichaya Poonlarp, Danai Boonyakiat, C. Chuamuangphan, M. Chanta
Abstract:
Current consumer behavior trends have changed towards greater health awareness, the well-being of society and interest in nature and the environment. The Royal Project Foundation is, therefore, well aware of organic agriculture. The project focuses only on using natural products and utilizing highland biological merits to increase resistance to diseases and insects for the produce grown. The project also brought in basic knowledge from a variety of available research information, including, but not limited to, improvement of soil fertility and control of plant insects with biological methods, in order to lay a foundation for developing and promoting farmers to grow quality produce with high health safety. This will finally lead to sustainability for future highland agriculture and a decrease in chemical use in the highland area, which is a source of natural watershed. However, there are still shortcomings in postharvest management in terms of quality and losses, such as bruising, rottenness, wilting and yellowish leaves. These losses negatively affect the quality and shelf life of organic vegetables. Therefore, it is important that a research study of appropriate and effective postharvest management is conducted for each organic vegetable to minimize product loss and find root causes of postharvest losses, which would contribute to future postharvest management best practices. This can be achieved through surveys and data collection from postharvest processes in order to analyze the causes of postharvest losses of organic pak-choi, baby pak-choi, and choy sum. Consequently, postharvest loss reduction strategies for organic vegetables can be achieved.
In this study, postharvest losses of organic pak-choi, baby pak-choi, and choy sum were determined at each stage of the supply chain, starting from the field after harvesting, at the Development Center packinghouse, at the Chiang Mai packinghouse, at the Bangkok packinghouse and at the Royal Project retail shop in Chiang Mai. The results showed that postharvest losses of organic pak-choi, baby pak-choi, and choy sum were 86.05, 89.05 and 59.03 percent, respectively. The main factors contributing to losses of organic vegetables were mechanical damage and underutilized parts and/or failure to meet the minimum quality standard. Good practices were developed after the causes of losses were identified. With appropriate postharvest handling and management, for example, temperature control, hygienic cleaning, and reducing the duration of the supply chain, postharvest losses of all organic vegetables should be remarkably reduced. Keywords: postharvest losses, organic vegetables, handling improvement, shelf life, supply chain
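An overall supply-chain loss like the 86.05% figure for organic pak-choi composes multiplicatively from the stage-wise losses measured at each point. A small illustrative sketch; the stage percentages below are hypothetical, only the overall figure comes from the abstract:

```python
# Illustrative composition of an overall supply-chain loss from
# stage-wise losses (field, packinghouses, retail). The per-stage
# fractions are hypothetical; only ~86% overall is from the abstract.

def overall_loss(stage_losses):
    """stage_losses: fraction lost at each stage, applied sequentially
    to whatever quantity reaches that stage."""
    remaining = 1.0
    for loss in stage_losses:
        remaining *= (1.0 - loss)
    return 1.0 - remaining

# e.g. four stages losing 50%, 40%, 30% and 33.5% of what reaches them
loss = overall_loss([0.50, 0.40, 0.30, 0.335])
# remaining = 0.5 * 0.6 * 0.7 * 0.665 ~= 0.14, overall loss ~86%
```

Because the losses compound, reducing losses at the earliest stages (field and first packinghouse) has the largest absolute effect on the quantity reaching retail.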
Procedia PDF Downloads 474
6613 The Theory of Relativity (K)
Authors: Igor Vladimirovich Kuzminov
Abstract:
The proposed article is an alternative version of the Theory of Relativity. The version is based on the concepts of classical Newtonian physics and does not deny the existing calculation base. The proposed theory completely denies Einstein's existing Theory of Relativity. The only thing that connects these theories is that the proposed theory is also built on postulates. The proposed theory is intended to establish the foundation of classical Newtonian physics. The proposed theory is intended to establish continuity in the development of the fundamentals of physics and is intended to eliminate all kinds of speculation in explanations of physical phenomena. An example of such speculation is Einstein's Theory of Relativity (E). Keywords: the theory of relativity, postulates of the theory of relativity, criticism of Einstein's theory, classical physics
Procedia PDF Downloads 49
6612 LACGC: Business Sustainability Research Model for Generations Consumption, Creation, and Implementation of Knowledge: Academic and Non-Academic
Authors: Satpreet Singh
Abstract:
This paper introduces the new LACGC model to sustain academic and non-academic businesses for future educational and organizational generations. The consumption of knowledge and the creation of new knowledge are a strength and focal interest of all academic and non-academic organizations. Implementing newly created knowledge sustains businesses into the next generation with growth and without detriment. Existing models, like the scholar-practitioner model and organizational knowledge creation models, focus specifically on academic or non-academic settings, not both. The LACGC model can be used in both academic and non-academic settings at the domestic or international level. Researchers and scholars play a substantial role in finding literature and practice gaps in academic and non-academic disciplines. The LACGC model places no restriction on the number of recurrences, because the consumption, creation, and implementation of new ideas, disciplines, systems, and knowledge is a never-ending process and must continue from one generation to the next. Keywords: academics, consumption, creation, generations, non-academics, research, sustainability
Procedia PDF Downloads 196