Search results for: surrogate models
4931 Determining the Performance of Data Mining Algorithms in Identifying the Influential Factors and Predicting Ischemic Stroke: A Comparative Study in the Southeast of Iran
Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard
Abstract:
Ischemic stroke is a common cause of disability and mortality: it is the fourth leading cause of death in the world, and the third according to some sources. Only one third of patients with ischemic stroke fully recover; one third are left with permanent disability, and one third die. Predictive models for stroke therefore have a vital role in reducing the complications and costs related to this disease. The aim of this study was to identify the effective factors and predict ischemic stroke with the help of data mining (DM) methods. The present study was a descriptive-analytic study. The population was 213 cases from among patients referring to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. The study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most effective factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability to determine the factors affecting ischemic stroke. Thus, predictive models built with this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
Keywords: data mining, ischemic stroke, decision tree, Bayesian network
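As a rough, hedged illustration of the algorithm comparison described above, the sketch below scores a single decision tree on checklist-style binary risk factors. CHAID itself is not available in scikit-learn, so a CART-style DecisionTreeClassifier stands in for it, and the data are synthetic placeholders rather than the Zahedan cohort.

```python
# Toy stand-in data: binary risk factors for 200 hypothetical patients.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = ["anemia", "diabetes_mellitus", "hyperlipidemia",
            "transient_ischemic_attack", "coronary_artery_disease",
            "atherosclerosis"]
X = rng.integers(0, 2, size=(200, len(features)))
y = (X.sum(axis=1) + rng.integers(0, 2, size=200) > 3).astype(int)  # toy label

# CART decision tree as a stand-in for CHAID, scored by 10-fold accuracy.
model = DecisionTreeClassifier(max_depth=4, random_state=0)
accuracy = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
print(f"mean 10-fold accuracy: {accuracy:.1%}")
```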
Procedia PDF Downloads 174
4930 Factors Influencing Soil Organic Carbon Storage Estimation in Agricultural Soils: A Machine Learning Approach Using Remote Sensing Data Integration
Authors: O. Sunantha, S. Zhenfeng, S. Phattraporn, A. Zeeshan
Abstract:
The decline of soil organic carbon (SOC) in global agriculture is a critical issue requiring rapid and accurate estimation for informed policymaking. While it is recognized that SOC predictors vary significantly when derived from remote sensing data and environmental variables, identifying the specific parameters most suitable for accurately estimating SOC in diverse agricultural areas remains a challenge. This study utilizes remote sensing data to precisely estimate SOC and identify influential factors in diverse agricultural areas, such as paddy, corn, sugarcane, cassava, and perennial crops. Extreme gradient boosting (XGBoost), random forest (RF), and support vector regression (SVR) models are employed to analyze these factors' impact on SOC estimation. The results show that the key factors influencing SOC estimation include slope, vegetation indices (EVI), spectral reflectance indices (red index, red edge2), temperature, land use, and surface soil moisture, as indicated by their averaged importance scores across the XGBoost, RF, and SVR models. Different machine learning algorithms thus identify different influential factors from the same remote sensing data and environmental variables, which underlines the importance of feature selection for accurate SOC estimation.
Keywords: factors influencing SOC estimation, remote sensing data, environmental variables, machine learning
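A minimal sketch of the averaged-importance idea. The abstract does not state which importance measure was used, so permutation importance is assumed here because it applies uniformly to XGBoost, RF, and SVR; the data are synthetic stand-ins for the seven predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.svm import SVR
from xgboost import XGBRegressor

# Columns stand in for slope, EVI, red index, red edge2, temperature,
# land use, and surface soil moisture; y stands in for measured SOC.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 7)), rng.normal(size=200)

models = [XGBRegressor(n_estimators=200), RandomForestRegressor(200), SVR()]
scores = []
for m in models:
    m.fit(X, y)
    result = permutation_importance(m, X, y, n_repeats=10, random_state=0)
    scores.append(result.importances_mean)

avg_importance = np.mean(scores, axis=0)  # importance averaged over the 3 models
```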
Procedia PDF Downloads 35
4929 Prediction of DC Pulsed Current Sharing From Passive Parallel Battery-Supercapacitor Energy Storage Systems for DC-AC PWM Inverters
Authors: Andreas Helwig, John Bell, Wangmo
Abstract:
Hybrid energy storage systems (HESS) are gaining popularity for grid energy storage (ESS), driven by the increasingly dynamic nature of energy demands, which require both high energy and high power density. In particular, where energy storage systems must respond via inverters to increasingly fluctuating demand, the combination of a lithium iron phosphate (LFP) battery and a supercapacitor (SC) is an example of complex electrochemical devices that may benefit each other in pulse-width-modulated DC-AC inverter applications. This is due to the SC's ability to respond to instantaneous, high-current demands and the battery's long-term energy delivery. However, there is a knowledge gap on the current sharing mechanism within a HESS supplying a load powered by high-frequency pulse-width modulation (PWM) switching, and hence on the mechanism of aging in such a HESS. This paper predicts the current sharing between battery and SC using various SC equivalent circuits in the MATLAB/Simulink simulation environment. The findings predict a significant reduction of battery current when the battery is used in a hybrid combination with a supercapacitor as compared to a battery-only model. The impact of PWM inverter carrier switching frequency on current requirements was analyzed between 500 Hz and 31 kHz. While no clear trend emerged, the models predicted optimal frequencies that minimize current demands.
Keywords: hybrid energy storage, carrier frequency, PWM switching, equivalent circuit models
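For intuition on why the SC relieves the battery at PWM carrier frequencies, here is a small sketch of current division between two first-order branch impedances; the component values and the first-order branches are illustrative assumptions, not the paper's equivalent circuits.

```python
import numpy as np

# Illustrative branch parameters (not from the paper).
R_b, L_b = 25e-3, 1e-6      # battery: series resistance (ohm) and inductance (H)
R_sc, C_sc = 10e-3, 100.0   # supercapacitor: ESR (ohm) and capacitance (F)

f = np.array([500.0, 1e3, 5e3, 15e3, 31e3])   # carrier frequencies, Hz
w = 2 * np.pi * f
Z_b = R_b + 1j * w * L_b
Z_sc = R_sc + 1 / (1j * w * C_sc)

# Current division: fraction of the pulsed inverter current in each branch.
share_b = np.abs(Z_sc / (Z_b + Z_sc))
for fi, sb in zip(f, share_b):
    print(f"{fi:>8.0f} Hz  battery share of ripple current ~ {sb:.2f}")
```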
Procedia PDF Downloads 26
4928 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron
Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni
Abstract:
The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model, which will predict the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used was collected by Mikro's Traffic Monitoring (MTM). A multi-layer perceptron (MLP) was used individually to construct the model, and the MLP was also combined with the bagging ensemble method to train the model. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive performance and prediction cost. The cost was computed using a combination of the loss matrix and the confusion matrix. The predictive models show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume, and day of month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families. The logistics industry will save more than twice what it is currently spending.
Keywords: bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow
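A hedged sketch of the bagged-MLP evaluation described above, assuming a binary congested/free-flow status label and an illustrative loss matrix; the synthetic columns stand in for travel time, average speed, traffic volume, and day of month.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))        # travel time, avg speed, volume, day
y = (X[:, 2] > 0.3).astype(int)      # toy congested / free-flow label

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
bagged = BaggingClassifier(mlp, n_estimators=10, random_state=0)

y_pred = cross_val_predict(bagged, X, y, cv=10)   # cross-validated predictions
cm = confusion_matrix(y, y_pred)

# Prediction cost = element-wise product of confusion and loss matrices.
loss = np.array([[0, 1], [5, 0]])    # illustrative: missed congestion costs more
print("cost:", (cm * loss).sum())
```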
Procedia PDF Downloads 344
4927 Nature of Forest Fragmentation Owing to Human Population along Elevation Gradient in Different Countries in Hindu Kush Himalaya Mountains
Authors: Pulakesh Das, Mukunda Dev Behera, Manchiraju Sri Ramachandra Murthy
Abstract:
Large numbers of people living in and around the Hindu Kush Himalaya (HKH) region depend on this diverse mountainous region for ecosystem services. Following the global trend, this region is also experiencing rapid population growth and growing demand for timber and agricultural land. The eight countries sharing the HKH region have different forest resource utilization and conservation policies that exert varying forces on the forest ecosystem. This has created variable spatial as well as altitudinal gradients in the rate of deforestation and the corresponding forest patch fragmentation. The quantitative relationship between fragmentation and demography along the elevation gradient has not been established before for the HKH. The current study attributes the overall and country-wise nature of landscape fragmentation along the altitudinal gradient to the demography of each sharing country. We used tree canopy cover data derived from Landsat to analyze the deforestation and afforestation rates and the corresponding landscape fragmentation observed during 2000-2010. The area-weighted mean radius of gyration (AMN radius of gyration) was computed owing to its advantage as a spatial indicator of fragmentation over non-spatial fragmentation indices. Using the subtraction method, the change in fragmentation during 2000-2010 was computed. Using tree canopy cover as a surrogate of forest cover, the highest forest loss was observed in Myanmar, followed by China, India, Bangladesh, Nepal, Pakistan, Bhutan, and Afghanistan. The sequence for fragmentation was different, however: the maximum fragmentation was observed in Myanmar, followed by India, China, Bangladesh, and Bhutan, whereas increases in fragmentation were seen in Nepal, Pakistan, and Afghanistan, in that order. Using the SRTM-derived DEM, we observed a higher rate of fragmentation up to 2400 m, which corroborated the high human population for the years 2000 and 2010. To derive the nature of fragmentation along the altitudinal gradient, the Statistica software was used, with a user-defined function fitted by regression applying the Gauss-Newton estimation method with 50 iterations. We observed an overall logarithmic decrease in fragmentation change (area-weighted mean radius of gyration), forest cover loss, and population growth during 2000-2010 along the elevation gradient, with very high R2 values (0.889, 0.895, and 0.944, respectively). The observed negative logarithmic function, with the major contribution in the initial elevation gradients, suggests gap-filling afforestation in the lower altitudes to enhance forest patch connectivity. Our finding on the pattern of forest fragmentation and human population across the elevation gradient in the HKH region has policy-level implications for the different nations and would help in characterizing hotspots of change. The availability of free satellite-derived data products on forest cover and DEMs, gridded demographic data, and geospatial tools enabled a quick evaluation of the forest fragmentation vis-a-vis the human impact pattern along the elevation gradient in the HKH.
Keywords: area-weighted mean radius of gyration, fragmentation, human impact, tree canopy cover
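The negative-logarithmic relationship can be sketched as below. SciPy's curve_fit uses Levenberg-Marquardt rather than the Gauss-Newton routine the study ran in Statistica, and the per-elevation-band values are made-up placeholders, so this only illustrates the functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fragmentation-change values per elevation band (m).
elevation = np.array([200, 600, 1000, 1400, 1800, 2200, 2600, 3000, 3400])
frag_change = np.array([5.1, 3.9, 3.2, 2.7, 2.3, 2.0, 1.6, 1.4, 1.2])

def neg_log(x, a, b):
    return a - b * np.log(x)   # negative logarithmic function of elevation

(a, b), _ = curve_fit(neg_log, elevation, frag_change, p0=(10.0, 1.0))
residuals = frag_change - neg_log(elevation, a, b)
r2 = 1 - residuals.var() / frag_change.var()
print(f"y = {a:.2f} - {b:.2f} ln(x),  R^2 = {r2:.3f}")
```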
Procedia PDF Downloads 215
4926 Supply Chain Design: Criteria Considered in Decision Making Process
Authors: Lenka Krsnakova, Petr Jirsak
Abstract:
Prior research on facility location in the supply chain is mostly focused on the improvement of mathematical models. This is because supply chain design has long been an area of operations research that emphasizes mainly quantitative criteria. Qualitative criteria are still highly neglected within supply chain design research. Facility location in the supply chain has become a multi-criteria decision-making problem rather than a single-criterion decision due to changes in market conditions. Thus, both qualitative and quantitative criteria have to be included in the decision-making process. The aim of this study is to emphasize the importance of qualitative criteria as key parameters of relevant mathematical models. We examine which criteria are taken into consideration when Czech companies decide about their facility location. A literature review on criteria used in the facility location decision-making process creates the theoretical background for the study. Data collection was conducted through a questionnaire survey. The questionnaire was sent to manufacturing and business companies of all sizes (small, medium, and large enterprises) with representation in the Czech Republic within the following sectors: automotive, toys, clothing, electronics, and pharmaceuticals. A comparison is made between the criteria that prevail in the current research and those considered important by companies in the Czech Republic. Despite the number of articles focused on supply chain design, only a minority of them consider qualitative criteria, and they rarely treat supply chain design as a multi-criteria decision-making problem. Preliminary results of the questionnaire survey outline that companies in the Czech Republic see qualitative criteria and their impact on the facility location decision as crucial. Qualitative criteria such as company strategy, quality of working environment, or future development expectations are confirmed to be considered by Czech companies. This study confirms that qualitative criteria can significantly influence whether a particular location could or could not be the right place for a logistics facility. The research has two major limitations: researchers who focus on improving mathematical models mostly do not mention the criteria that enter the model, and the Czech supply chain managers selected important criteria from a group of 18 available criteria and assigned them importance weights, which does not necessarily mean that these criteria were taken into consideration when the last facility location was chosen, only how they perceive them today. Since the study confirmed the necessity of future research on how qualitative criteria influence the decision-making process about facility location, the authors have already started in-depth interviews with the participating companies to reveal how the inclusion of qualitative criteria into the decision-making process about facility location influences the company's performance.
Keywords: criteria influencing facility location, Czech Republic, facility location decision-making, qualitative criteria
Procedia PDF Downloads 326
4925 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea
Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti
Abstract:
This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments when underwater visibility is > 1 m, reducing its implementation on the many submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improved optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It shows that underwater photogrammetry can and should be used as one of the main recording methods even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.
Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey
Procedia PDF Downloads 236
4924 Local Energy and Flexibility Markets to Foster Demand Response Services within the Energy Community
Authors: Eduardo Rodrigues, Gisela Mendes, José M. Torres, José E. Sousa
Abstract:
In the sequence of the liberalisation of the electricity sector, a progressive engagement of consumers has been considered and targeted by sector regulatory policies. With the objective of promoting market competition while protecting consumers' interests, by transferring some of the upstream benefits to the end users while reaching a fair distribution of system costs, different market models to value consumers' demand flexibility at the energy community level are envisioned. Local Energy and Flexibility Markets (LEFM) involve stakeholders interested in providing or procuring local flexibility for community, services, and market value. Under the scope of DOMINOES, a European research project supported by Horizon 2020, the local market concept developed is expected to:
• Enable consumer/prosumer empowerment, by allowing them to value their demand flexibility and Distributed Energy Resources (DER);
• Value local liquid flexibility to support innovative distribution grid management, e.g., local balancing and congestion management, voltage control and grid restoration;
• Ease the wholesale market uptake of DER, namely small-scale flexible load aggregation as Virtual Power Plants (VPPs), facilitating Demand Response (DR) service provision;
• Optimise the management and local sharing of Renewable Energy Sources (RES) in Medium Voltage (MV) and Low Voltage (LV) grids, through energy transactions within an energy community;
• Enhance the development of energy markets through innovative business models, compatible with ongoing policy developments, that promote easy access of retailers and other service providers to the local markets, allowing them to take advantage of communities' flexibility to optimise their portfolios and subsequently their participation in external markets.
The general concept proposed foresees a flow of market actions, technical validations, subsequent deliveries of energy and/or flexibility, and balance settlements. Since the market operation should be dynamic and capable of addressing different requests, either prioritising balancing and prosumer services or the system's operation, direct procurement of flexibility within the local market must also be considered. This paper aims to highlight the research on the definition of suitable DR models to be used by the Distribution System Operator (DSO), in case of technical needs, and by the retailer, mainly for portfolio optimisation and to resolve imbalances. The models, to be proposed and implemented within relevant smart distribution grid and microgrid validation environments, are focused on day-ahead and intraday operation scenarios, for predictive management and near-real-time control respectively, under the DSO's perspective. At the local level, the DSO will be able to procure flexibility in advance to tackle different grid constraints (e.g., demand peaks, forecasted voltage and current problems, and maintenance works), or during the operating day, to answer unpredictable constraints (e.g., outages, frequency deviations, and voltage problems). Due to the inherent risks of their active market participation, retailers may resort to DR models to manage their portfolios, by optimising their market actions and resolving imbalances. The interaction among the market actors involved in DR activation and flexibility exchange is explained by a set of sequence diagrams for the DR modes of use, from the DSO and energy provider perspectives:
• DR for the DSO's predictive management – before the operating day;
• DR for the DSO's real-time control – during the operating day;
• DR for the retailer's day-ahead operation;
• DR for the retailer's intraday operation.
Keywords: demand response, energy communities, flexible demand, local energy and flexibility markets
Procedia PDF Downloads 99
4923 Implementation of Conceptual Real-Time Embedded Functional Design via Drive-By-Wire ECU Development
Authors: Ananchai Ukaew, Choopong Chauypen
Abstract:
Design concepts of real-time embedded systems can be realized initially by introducing novel design approaches. In this work, a model-based design approach and in-the-loop testing were employed early in the conceptual and preliminary phases to formulate design requirements and perform quick real-time verification. The design and analysis methodology includes simulation analysis, model-based testing, and in-the-loop testing. The design of a conceptual drive-by-wire (DBW) algorithm for an electronic control unit (ECU) is presented to demonstrate the conceptual design process, analysis, and functionality evaluation. The DBW ECU function concepts can be implemented in the vehicle system to improve the drivability of an electric vehicle (EV) conversion. Within a new development process, however, conceptual ECU functions and parameters need to be evaluated. As a result, a testing system was employed to support the evaluation of the conceptual DBW ECU functions. In the current setup, the system components consisted of the actual DBW ECU hardware, electric vehicle models, and the controller area network (CAN) protocol. The vehicle models and the CAN bus interface were both implemented as real-time applications, where the ECU and CAN protocol functionality were verified according to the design requirements. The proposed system could potentially benefit rapid real-time analysis of design parameters for conceptual system or software algorithm development.
Keywords: drive-by-wire ECU, in-the-loop testing, model-based design, real-time embedded system
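For a flavour of the in-the-loop CAN exchange, here is a small sketch using python-can's built-in virtual bus, with one endpoint standing in for the DBW ECU and the other for the real-time vehicle model; the message IDs, payload layout, and scaling are hypothetical, not the paper's.

```python
import can

# Two endpoints on the same virtual channel see each other's frames.
ecu_bus = can.interface.Bus(interface="virtual", channel="dbw_demo")
model_bus = can.interface.Bus(interface="virtual", channel="dbw_demo")

# ECU broadcasts a throttle command (0-100 %) as a single data byte.
ecu_bus.send(can.Message(arbitration_id=0x101, data=[42], is_extended_id=False))

msg = model_bus.recv(timeout=1.0)        # vehicle model receives the frame
throttle = msg.data[0]                   # decode per the assumed signal map
model_bus.send(can.Message(arbitration_id=0x201,
                           data=[min(255, throttle * 2)],  # mock speed feedback
                           is_extended_id=False))

ecu_bus.shutdown()
model_bus.shutdown()
```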
Procedia PDF Downloads 349
4922 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal
Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali
Abstract:
The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop the models: the Frequency Ratio (FR) and Shannon Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant influencing and controlling factors derived from fieldwork and literature review. Field data collection involved spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, 114 springs were divided in a 70/30 ratio, with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models' outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the Area Under the Curve (AUC). The success rate AUC values for the FR and SE methods were 78.73% and 77.09%, respectively, and the prediction rate AUC values were 76.31% and 74.08%. The results indicate that the FR model exhibits greater prediction capability than the SE model in this case study.
Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management
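The frequency ratio weight itself is simple: for each class of a thematic layer, it is the share of spring pixels falling in that class divided by the share of the study area the class occupies (values above 1 favour groundwater). A minimal sketch with made-up rasters:

```python
import numpy as np

def frequency_ratio(factor_map, spring_mask):
    # FR per class: % of springs in the class / % of area in the class.
    fr = {}
    n_springs = spring_mask.sum()
    for c in np.unique(factor_map):
        in_class = factor_map == c
        pct_springs = spring_mask[in_class].sum() / n_springs
        pct_area = in_class.sum() / factor_map.size
        fr[c] = pct_springs / pct_area
    return fr

# Hypothetical rasters: five slope classes and a boolean spring-location grid.
rng = np.random.default_rng(1)
slope_classes = rng.integers(0, 5, size=(100, 100))
springs = rng.random((100, 100)) < 0.01
print(frequency_ratio(slope_classes, springs))
```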
Procedia PDF Downloads 81
4921 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, analyze the biological pathways and regulatory mechanisms of these biomarkers, and develop a machine learning model to predict DN-related biomarkers and validate their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by the intersection of these with fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immune infiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.
Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost
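A minimal sketch of the RFE-plus-gradient-boosting step, assuming an expression matrix with DN/control labels; the synthetic matrix and gene names are placeholders, not the GEO data.

```python
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))            # samples x candidate genes
y = rng.integers(0, 2, size=120)          # DN vs control labels
genes = [f"gene_{i}" for i in range(50)]

# Recursively drop the least important 10% of genes until 7 remain.
selector = RFE(XGBClassifier(n_estimators=100, eval_metric="logloss"),
               n_features_to_select=7, step=0.1)
selector.fit(X, y)
picked = [g for g, keep in zip(genes, selector.support_) if keep]
print("candidate biomarkers:", picked)
```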
Procedia PDF Downloads 9
4920 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach
Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma
Abstract:
Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major, and especially minor, hysteresis loops is one of the most challenging topics in computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous (NARX) neural network model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX
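The NARX idea is to regress the next field sample on lagged excitation and lagged field values. Below is a toy open-loop (series-parallel) sketch with a small MLP standing in for the network; the signals are synthetic, not magnet measurements, and a true NARX evaluation would also feed predictions back in closed loop.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_design(u, y, n_lags=4):
    # Stack current + past inputs u and past outputs y as NARX regressors.
    rows = [np.r_[u[t - n_lags:t + 1], y[t - n_lags:t]]
            for t in range(n_lags, len(u))]
    return np.array(rows), y[n_lags:]

# Synthetic excitation current and a toy "field" with memory, as placeholders.
t = np.linspace(0, 20, 2000)
current = np.sin(t) + 0.3 * np.sin(3.7 * t)
field = np.tanh(current) + 0.05 * np.roll(current, 5)

X, target = lagged_design(current, field)
narx = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
narx.fit(X, target)
rel_rmse = np.sqrt(np.mean((narx.predict(X) - target) ** 2)) / np.ptp(target)
print(f"relative RMSE ~ {rel_rmse * 1e6:.0f} ppm (toy data)")
```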
Procedia PDF Downloads 130
4919 Formation of an Empire in the 21st Century: Theoretical Approach in International Relations and a Worldview of the New World Order
Authors: Rami Georg Johann
Abstract:
Against the background of the current geopolitical constellations, the author looks at various empire models, which are discussed and compared with each other with regard to their stability and functioning. The focus is on the fifth concept as a possible new world order in the 21st century. All empires to be designed are conceptualised based on one, two, three, four, and five worlds, each world made up of a different constellation of states and related coalitions. All systems are discussed in detail. The one-world system, the “Western Empire,” is presented as a possible solution to a new world order in the 21st century (the fifth concept). The term “Western” in “Western Empire” describes the Western concept after World War II, which was the result of two horrible world wars in the 20th century. With this in mind, the fifth concept forms a stable empire system, the “Western Empire,” by political measures tied to two issues. This world order thus provides significantly higher long-term stability than all other empire models (comprising five, four, three, or two worlds). Confrontations and threats of war are reduced to a minimum. The two issues mentioned are “merger” and “competition.” These are the main differences in forming an empire compared to all empires and realms in the history of mankind. The fifth concept of this theory, the “Western Empire,” acts explicitly as a counter-model: the Western Empire is formed by the merger of world powers without war, creating a world order without competition. This merged entity secures long-term peace, stability, democratic values, freedom, human rights, equality, and justice in the new world order.
Keywords: empire formation, theory of international relations, Western Empire, world order
Procedia PDF Downloads 150
4918 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' this year declared an alert due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization, and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
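A minimal sketch of the k-mer featurization feeding one of the compared classifiers, on toy sequences; the real inputs would be the labeled VFDB/CARD-derived genes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy labelled sequences: 1 = virulence-associated, 0 = not (placeholders).
seqs = ["ATGGCGTACCTT", "ATGCCGTATCTG", "TTGGCATACGTA", "ATGGCTTACCTA"]
labels = [1, 0, 1, 0]

# 3-mer bag-of-words features extracted from the raw nucleotide strings.
vec = CountVectorizer(analyzer="char", ngram_range=(3, 3))
X = vec.fit_transform(seqs)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=2, scoring="accuracy"))
```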
Procedia PDF Downloads 31
4917 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast
Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef
Abstract:
This paper presents a study to provide sufficient and reliable information for constructing a photovoltaic energy profile of the Birzeit University (BZU) campus based on the weather forecast. The developed photovoltaic energy profile helps to predict the energy yield of the photovoltaic systems based on the weather forecast and hence helps in planning energy production and consumption. Two models are developed in this paper: a Clear Sky Irradiance model and a Cloud-Cover Radiation Model, to predict the irradiance for a clear-sky day and a cloudy day, respectively. The adopted procedure for developing such models takes into consideration two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building at the Birzeit University campus. Second, power readings of a fully operational 51 kW commercial photovoltaic system, installed on the rooftop of the adjacent College of Pharmacy-Nursing and Health Professions building, are used to validate the output of a simulation model and to help refine its structure. Based on a comparison between a mathematical model, which calculates the clear-sky irradiance for the University's location, and two sets of accumulated measured data, it is found that the simulation system closely matches the installed PV power station on clear-sky days. However, these comparisons show a divergence between the expected and actual energy yields in extreme weather conditions, including clouding and soiling effects. Therefore, a more accurate irradiance prediction model was developed, the Cloud-Cover Radiation Model (CRM), which takes into consideration weather factors that affect irradiance, such as relative humidity and cloudiness. The equivalent mathematical formulas implement corrections to provide more accurate inputs to the simulation system. The results of the CRM show a very good match with the actual measured irradiance during a cloudy day. The developed photovoltaic profile helps in predicting the output energy yield of the photovoltaic system installed at the University campus based on the predicted weather conditions. The simulation and practical results for both models are in very good agreement.
Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast
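The paper's own formulas are not given in the abstract, but the two-model structure can be sketched with a textbook clear-sky curve and a cloud-cover correction. The Haurwitz-style clear-sky coefficients and the Kasten-Czeplak-style cloud term below are common literature values, used here purely as assumptions.

```python
import numpy as np

def clear_sky_ghi(zenith_deg):
    # Haurwitz-type clear-sky GHI in W/m^2 (coefficients as commonly cited).
    cz = np.cos(np.radians(zenith_deg))
    return np.where(cz > 0, 1098.0 * cz * np.exp(-0.059 / np.clip(cz, 1e-3, 1)), 0.0)

def cloudy_ghi(ghi_clear, cloud_fraction):
    # Kasten-Czeplak-style correction: G = Gc * (1 - 0.75 * C**3.4).
    return ghi_clear * (1.0 - 0.75 * cloud_fraction ** 3.4)

zenith = np.array([30.0, 50.0, 70.0])        # from a solar-position routine
cloud = np.array([0.2, 0.5, 0.9])            # forecast cloud cover, 0..1
ghi = cloudy_ghi(clear_sky_ghi(zenith), cloud)
print(51.0 * ghi / 1000.0)                   # crude kW estimate for a 51 kW array
```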
Procedia PDF Downloads 132
4916 Educators’ Adherence to Learning Theories and Their Perceptions on the Advantages and Disadvantages of E-Learning
Authors: Samson T. Obafemi, Seraphin D. Eyono-Obono
Abstract:
Information and Communication Technologies (ICTs) are pervasive nowadays, including in education, where they are expected to improve the performance of learners. However, the hope placed in ICTs to find viable solutions to the problem of poor academic performance in schools in the developing world has not yet yielded the expected benefits. This problem serves as the motivation for this study, whose aim is to examine the perceptions of educators on the advantages and disadvantages of e-learning. This aim is subdivided into two types of research objectives. Objectives on the identification and design of theories and models are achieved using content analysis and literature review; the objective on the empirical testing of such theories and models is achieved through a survey of educators from different schools in the Pinetown District of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyse the data collected by the survey questionnaire, using descriptive statistics and Pearson correlations, after assessing the validity and reliability of the data. The main hypothesis driving this study is that there is a relationship between the demographics of educators and their adherence to learning theories on one side, and their perceptions on the advantages and disadvantages of e-learning on the other side, as argued by existing research; but this research views these learning theories under three perspectives: educators' adherence to self-regulated learning, to constructivism, and to progressivism. This hypothesis was fully confirmed by the empirical study, except for the demographic factors, where teachers' level of education was found to be the only demographic factor affecting the perceptions of educators on the advantages and disadvantages of e-learning.
Keywords: academic performance, e-learning, learning theories, teaching and learning
Procedia PDF Downloads 273
4915 Effectiveness of Parent Coaching Intervention for Parents of Children with Developmental Disabilities in the Home and Community
Authors: Elnaz Alimi, Keriakoula Andriopoulos, Sam Boyer, Weronika Zuczek
Abstract:
Occupational therapists can use coaching strategies to guide parents in providing therapy for their children with developmental disabilities. Evidence from various fields has shown increased parental self-efficacy and positive child outcomes as benefits of home- and community-based parent coaching models. A literature review was conducted to investigate the effectiveness of parent coaching interventions delivered in home and community settings for children with developmental disabilities ages 0-12, on a variety of parent and child outcomes. CINAHL Plus, PsycINFO, PubMed, and OTseeker were used as databases. The inclusion criteria consisted of children with developmental disabilities ages 0-12 and their parents, parent coaching models conducted in the home and community, and parent and child outcomes. Studies were excluded if they were in a language other than English or published before 2000. Results showed that parent coaching interventions led to more positive therapy outcomes in child behaviors and symptoms related to their diagnosis or disorder. Additionally, coaching strategies had positive effects on parental satisfaction with therapy, parental self-efficacy, and family dynamics. Findings revealed decreased parental stress and improved parent-child relationships. Further research on parent coaching could involve studying the feasibility of coaching within occupational therapy specifically, incorporating cultural elements into coaching, qualitative studies on parental satisfaction with coaching, and measuring quality-of-life outcomes for the whole family.
Keywords: coaching model, developmental disabilities, occupational therapy, pediatrics
Procedia PDF Downloads 194
4914 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance-Based Design for Near-Field Earthquakes
Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi
Abstract:
Non-linear dynamic time history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under the influence of earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among professional engineers, as they are not only difficult to access but also complex and time-consuming to use. Moreover, commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models which can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation, and 'pinching' under cyclic load reversals in the inelastic range of behavior. In this scenario, push-over or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis, offering a practical and efficient alternative for rationally evaluating seismic demands. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry-infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures that implement nonlinear static (pushover) analysis in conjunction with the response spectrum concept. An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for the performance-based design of masonry-infilled R/C frames for near-field earthquake ground motions.
Keywords: nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes
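In capacity spectrum terms, the performance point is where the pushover capacity curve, converted to spectral coordinates, meets the damped demand spectrum. A toy numerical sketch with invented curves (not the studied frames):

```python
import numpy as np

# Illustrative ADRS curves: spectral displacement Sd (m) vs acceleration Sa (g).
Sd = np.linspace(1e-4, 0.20, 400)
Sa_capacity = np.minimum(8.0 * Sd, 0.35 + 0.4 * Sd)   # bilinear pushover capacity
Sa_demand = 0.9 / (1.0 + 25.0 * Sd)                   # damped demand, toy shape

# Performance point: first displacement where capacity reaches demand.
i = np.argmax(Sa_capacity >= Sa_demand)
print(f"performance point: Sd ~ {Sd[i]*1000:.1f} mm, Sa ~ {Sa_capacity[i]:.2f} g")
```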
Procedia PDF Downloads 404
4913 Predictive Analytics of Bike Sharing Rider Parameters
Authors: Bongs Lainjo
Abstract:
The evolution and escalation of bike-sharing programs (BSPs) continue unabated. Since the sixties, many countries have introduced different BSP models and strategies, with variations ranging from dockless models to electronic real-time monitoring systems. Reasons for using BSPs include recreation, errands, work, etc., and there is every indication that more complex and innovative rider-friendly systems are yet to be introduced. The objective of this paper is to analyze the variables currently established by different operators and streamline them, identifying the most compelling ones using analytics. Given the contents of available databases, there is a lack of uniformity and no common standard on what is required and what is not. Two factors appear to be common: user type (registered or unregistered) and duration of each trip. This article uses historical data provided by one operator based in the greater Washington, District of Columbia, USA area. Several variables, including categorical and continuous data types, were screened. Eight out of 18 were considered acceptable and contributed significantly to determining a useful and reliable predictive model. Bike-sharing systems, in which individuals can borrow bikes from a computer-controlled system for a fee or free for a limited period, have become popular in recent years all around the world. Although this trend has resulted in many studies on public cycling systems, there have been few previous studies on the factors influencing public bicycle travel behavior. This study has identified unprecedented, useful, and pragmatic parameters required to improve BSP ridership dynamics.
Keywords: sharing program, historical data, parameters, ridership dynamics, trip duration
Procedia PDF Downloads 138
4912 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure can be improved from the beginning of the design stage thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer to investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work to improve the structural optimization of a composite bumper for a racing car subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters, subject to some design constraints, are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
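A minimal sketch of the DOE-plus-Kriging loop, with a Gaussian process standing in for the Kriging meta-model and a cheap analytic function standing in for the LS-DYNA runs; the design variables and bounds are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def sea_from_fe(x):
    # Stand-in for an LS-DYNA run returning specific energy absorption.
    t, r = x                                  # thickness, taper ratio (invented)
    return 5.0 - (t - 2.0) ** 2 - (r - 0.5) ** 2

# DOE: space-filling Latin hypercube sample of the design domain.
lo, hi = [1.0, 0.1], [4.0, 1.0]
X = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(20), lo, hi)
y = np.array([sea_from_fe(x) for x in X])

# Kriging meta-model fitted to the DOE, then searched on a dense sample.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
grid = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(2000), lo, hi)
print("predicted optimum (thickness, taper):", grid[np.argmax(gp.predict(grid))])
```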
Procedia PDF Downloads 256
4911 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend of the closing price of stocks and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using logistic regression, gradient boosting, and random forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend in the next 20 trading days. To predict trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data are between 2002-06-18 and 2016-12-30; validation data are between 2017-01-02 and 2019-12-31; testing data are between 2020-01-02 and 2022-03-17. We set the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as benchmarks to outperform, and we compared the return of our machine-learning-based portfolio on the test data with the returns of these benchmarks. We assessed model performance with the help of ROC-AUC scores and lift charts. We used logistic regression, gradient boosting, and random forest with a grid search approach to fine-tune hyperparameters. As a result of the empirical study, the uptrends and downtrends of five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results agree with the Random Walk Theory, which says that stock prices and price changes are unpredictable. Although we built several good models on the validation dataset, our model iterations failed on the test dataset. We implemented random forest, gradient boosting, and logistic regression and discovered that the complex models did not provide an advantage or additional performance compared with logistic regression; more complexity did not lead to better performance, so using a complex model is not an answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification; however, this labeling approach neither solves the stock prediction problem nor refutes the Random Walk Theory for stock prices.
Keywords: stock prediction, portfolio optimization, data science, machine learning
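A minimal sketch of the 20-trading-day trend label and a chronological train/test split, with a synthetic price series and two invented return features in place of the thesis's real inputs:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 5000))))
feats = pd.DataFrame({"ret_5": price.pct_change(5),
                      "ret_20": price.pct_change(20)})

# Label: does the price trend up over the next 20 trading days?
future = price.shift(-20)
label = (future > price).astype(float).where(future.notna())
data = pd.concat([feats, label.rename("up")], axis=1).dropna()

train, test = data.iloc[:4000], data.iloc[4000:]   # chronological split
clf = LogisticRegression().fit(train[["ret_5", "ret_20"]], train["up"])
acc = clf.score(test[["ret_5", "ret_20"]], test["up"])
print(f"out-of-sample trend accuracy: {acc:.2%}")
```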
Procedia PDF Downloads 80
4910 Degradation of Heating, Ventilation, and Air Conditioning Components across Locations
Authors: Timothy E. Frank, Josh R. Aldred, Sophie B. Boulware, Michelle K. Cabonce, Justin H. White
Abstract:
Materials degrade at different rates in different environments depending on factors such as temperature, aridity, salinity, and solar radiation. Therefore, predicting asset longevity depends, in part, on the environmental conditions to which the asset is exposed. Heating, ventilation, and air conditioning (HVAC) systems are critical to building operations yet are responsible for a significant proportion of their energy consumption. HVAC energy use increases substantially with slight operational inefficiencies. Understanding the environmental influences on HVAC degradation in detail will inform maintenance schedules and capital investment, reduce energy use, and increase lifecycle management efficiency. HVAC inspection records spanning 14 years from 21 locations across the United States were compiled and associated with the climate conditions to which they were exposed. Three environmental features were explored in this study: average high temperature, average low temperature, and annual precipitation, as well as four non-environmental features. Initial insights showed no correlations between individual features and the rate of HVAC component degradation. Using neighborhood component analysis, however, the most critical features related to degradation were identified. Two models were considered, and results varied between them. However, longitude and latitude emerged as potentially the best predictors of average HVAC component degradation. Further research is needed to evaluate additional environmental features, increase the resolution of the environmental data, and develop more robust models to achieve more conclusive results.
Keywords: climate, degradation, HVAC, neighborhood component analysis
Procedia PDF Downloads 431
4909 Additive Manufacturing of Titanium Metamaterials for Tissue Engineering
Authors: Tuba Kizilirmak
Abstract:
Distinct properties of porous metamaterials have been widely exploited in biomedicine, which requires three-dimensional (3D) porous structures combining fine mechanical features, biodegradability, and biocompatibility. Applications of metamaterials include (i) porous orthopedic and dental implants; (ii) in vitro cell culture and in vivo bone regeneration; and (iii) macro-, micro-, and nano-level porous metamaterials for sensors, diagnosis, and drug delivery. When designing metamaterials for tissue engineering, specific properties such as the surface-to-volume ratio, pore size, and degree of interconnection are selected to control cell behavior and bone ingrowth. In this study, the additive manufacturing technique of selective laser melting will be used to print the scaffolds. Selective laser melting prints 3D components layer by layer according to designed 3D CAD models and the supplied material. This study aims to design metamaterials in Ti6Al4V, a material that offers benefits in terms of mechanical and biological properties. Ti6Al4V scaffolds will support cell attachment by conferring a suitable area for cell adhesion. This study will examine osteoblast cell attachment on Ti6Al4V scaffolds after determining the optimum stiffness and other mechanical properties close to those of bone. Before producing the samples, a modeling technique will be used to simulate their mechanical behavior. The samples include different lattice models with varying amounts of porosity and density.
Keywords: additive manufacturing, titanium lattices, metamaterials, porous metals
Procedia PDF Downloads 194
4908 Finite Element Molecular Modeling: A Structural Method for Large Deformations
Authors: A. Rezaei, M. Huisman, W. Van Paepegem
Abstract:
Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets, etc. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics. For ease of reference, it is called Structural Finite Element Molecular Modeling (SFEMM). The SFEMM method improves on the available structural approaches for large deformations, without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, this method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond-stretch, bond-angle, and bond-torsion of carbon atoms. Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds a framework that offers more flexible features than the conventional molecular finite element models, serving structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, thereby concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
Keywords: finite element, large deformation, molecular mechanics, structural method
Procedia PDF Downloads 152
4907 Presuppositions and Implicatures in Four Selected Speeches of Osama Bin Laden's Legitimisation of 'Jihad'
Authors: Sawsan Al-Saaidi, Ghayth K. Shaker Al-Shaibani
Abstract:
This paper investigates certain linguistic properties of four selected speeches by Al-Qaeda's former leader Osama bin Laden, who, during his lifetime, sought to legitimise the use of jihad by Muslims in various countries. The researchers adopt van Dijk's (2009; 1998) Socio-Cognitive approach and Ideological Square theory, respectively. The Socio-Cognitive approach addresses the cognitive, socio-political, and discursive aspects of political discourse such as Osama bin Laden's, where political discourse is defined in terms of textual properties and contextual models. The ideological square refers to positive self-presentation and negative other-presentation, which help to enhance the textual and contextual analyses. Among the most significant properties in Osama bin Laden's discourse is the use of presuppositions and implicatures, both grounded in background knowledge and contextual models. The paper concludes that Osama bin Laden used a number of manipulative strategies that augmented and embellished the use of 'jihad' to build a more effective discourse for his audience. In addition, the findings reveal that bin Laden used different implicit and embedded interpretations of various topics, accepted as taken-for-granted truths, to legitimate jihad against his enemies. Many presuppositions in the speeches analysed produce particular common-sense assumptions and a world-view within the selected speeches. More importantly, these assumptions help consolidate the ideological analysis in terms of in-group and out-group members.
Keywords: Al-Qaeda, cognition, critical discourse analysis, Osama Bin Laden, jihad, implicature, legitimisation, presupposition, political discourse
Procedia PDF Downloads 239
4906 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit
Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey
Abstract:
Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or the female athlete triad. Presentation is often non-specific, and the condition is frequently misdiagnosed at initial examination. There is limited research addressing return-to-activity time after FNSF, although previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) the FNSF cohort demographics are examined, and (3) regression models are used to predict return-to-activity prognosis and, consequently, to identify bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student's t-test. (3) Lastly, linear regression and random forest regression models were fitted to this patient cohort to predict return-to-activity time, and an analysis of feature importance was conducted after fitting each model (see the sketch after this abstract). Results: OxSport clinical practice met the standard (100%) in 3/5 criteria; the criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that, of the patients with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Linear regression identified demographic factors that predicted return-to-activity time [R² = 79.172%; average error 0.226], highlighting four key variables: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Random forest regression performed better on this task [R² = 97.805%; average error 0.024]; its feature importance analysis again identified a set of four variables, three matching the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value) plus a fourth: age. Conclusion: OxSport clinical practice could be improved by evaluating bone stress risk factors more comprehensively; the importance of this evaluation is demonstrated by the proportion of patients found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were identified using regression models.
Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D
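The sketch below illustrates the two-model comparison described above, assuming hypothetical column names (vitamin_d, hip_dexa_t, neck_dexa_t, age, eating_disorder) and a hypothetical file fnsf_cohort.csv; the actual RhADiOn/OxSport fields and preprocessing are not public.

```python
# Hedged sketch of the described modelling step: fit a linear model and a
# random forest to the same predictors, then compare what each model leans on.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

df = pd.read_csv("fnsf_cohort.csv")  # assumed file with one row per patient
features = ["vitamin_d", "hip_dexa_t", "neck_dexa_t", "age", "eating_disorder"]
X, y = df[features], df["return_to_activity_weeks"]

linear = LinearRegression().fit(X, y)
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Coefficients (linear) vs. impurity-based importances (forest)
print(dict(zip(features, linear.coef_.round(3))))
print(dict(zip(features, forest.feature_importances_.round(3))))
```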
Procedia PDF Downloads 183
4905 A Comparative Study on Deep Learning Models for Pneumonia Detection
Authors: Hichem Sassi
Abstract:
Pneumonia, a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing the associated mortality. Presently, X-ray diagnosis stands out as a reasonably effective method, but manual scrutiny of a patient's chest radiograph by a proficient practitioner usually requires 5 to 15 minutes. Where cases are concentrated, this places immense pressure on clinicians to diagnose in time. Relying solely on the visual acumen of imaging doctors is therefore inefficient, particularly given the low speed of manual analysis, making the integration of artificial intelligence into the clinical image diagnosis of pneumonia imperative. AI recognition is notably rapid, and convolutional neural networks (CNNs) have demonstrated performance superior to human counterparts in image identification tasks. For our study, we utilized a dataset of chest X-ray images obtained from Kaggle, comprising 5216 training images and 624 test images categorized into two classes: normal and pneumonia. Employing five mainstream network architectures, we undertook a comprehensive analysis to classify the images in this dataset and compared the results. The integration of artificial intelligence, particularly through improved network architectures, is a transformative step towards more efficient and accurate clinical diagnoses across medical domains.
Keywords: deep learning, computer vision, pneumonia, models, comparative study
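As an indication of what one of the five networks in such a comparison might look like, here is a minimal fine-tuning sketch; the folder layout (chest_xray/train with NORMAL and PNEUMONIA subfolders) matches the common Kaggle release, but the backbone choice, preprocessing, and hyperparameters are assumptions, not the authors' setup.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained ResNet-18 for binary
# pneumonia classification on the Kaggle chest X-ray folder layout.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("chest_xray/train", transform=tf)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

net = models.resnet18(weights="IMAGENET1K_V1")
net.fc = nn.Linear(net.fc.in_features, 2)  # two classes: normal vs. pneumonia

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
net.train()
for images, labels in loader:  # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(net(images), labels)
    loss.backward()
    opt.step()
```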
Procedia PDF Downloads 64
4904 Exploring the Potential of Bio-Inspired Lattice Structures for Dynamic Applications in Design
Authors: Axel Thallemer, Aleksandar Kostadinov, Abel Fam, Alex Teo
Abstract:
For centuries, the forming processes found in nature have served as a source of inspiration for architects and designers alike, and most human artifacts appear to be based on ideas that stem from observing the biological world and its principles of growth. Throughout the cultural history of Homo faber, materials have mostly been used in their solid state: from hand axe to computer mouse, the principle of employing matter has hardly changed since the first creation. Only recently, with the help of additive-generative fabrication processes driven by Computer Aided Design (CAD), have designers been able to deconstruct solid artifacts into an outer skin and an internal lattice structure. The intention behind this approach is to create a new topology that reduces resource use and integrates functions into an additively manufactured component. However, the lattice geometries currently employed have typically not been thoroughly designed but rather taken from the basic-geometry libraries usually provided by the CAD software. In the study presented here, a group of 20 industrial design students created new and unique lattice structures using natural paragons as their models. The selected natural models span both the animate and inanimate world, ranging from the spiraling of narwhal tusks, the off-shooting of mangrove roots, and the minimal surfaces of soap bubbles to the rhythmical arrangement of molecular geometry, as in SiOC (carbon-rich silicon oxycarbide). This ideation process led to the design of a geometric cell, created in visual analogy to its respective natural model, which served as the basic module of the lattice structure. The spatial lattices were fabricated additively, mostly in [X]³ by [Y]³ by [Z]³ unit volumes, using selective powder bed melting in polyamide with (z-axis) 50 mm and 100 µm resolution, and subjected to mechanical testing of their elastic zone in a biomedical laboratory. The results demonstrate that additively manufactured lattice structures can acquire different properties when designed in analogy to natural models. Several of the lattices displayed the ability to store and return kinetic energy, while others revealed a structural failure mode that can be exploited where a controlled collapse of a structure is required. This discovery allows for various new applications of functional lattice structures within industrially created objects.
Keywords: bio-inspired, biomimetic, lattice structures, additive manufacturing
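A toy sketch of the tiling step implied by the cubic-unit specimens: given a designed unit cell expressed as strut endpoints, replicate it across an n-by-n-by-n grid. The unit cell, pitch, and representation here are invented for illustration and are not the students' actual geometry pipeline.

```python
# Toy sketch (not the students' workflow): replicate a unit cell's strut
# endpoints into an n x n x n lattice volume.

import itertools

# Hypothetical unit cell: struts as pairs of (x, y, z) points inside one 1x1x1 cell
UNIT_CELL = [((0, 0, 0), (1, 1, 1)), ((1, 0, 0), (0, 1, 1))]

def tile_lattice(cell, n=3, pitch=1.0):
    """Translate every strut of the unit cell across an n^3 grid of cells."""
    struts = []
    for i, j, k in itertools.product(range(n), repeat=3):
        offset = (i * pitch, j * pitch, k * pitch)
        for a, b in cell:
            struts.append((
                tuple(p + o for p, o in zip(a, offset)),
                tuple(p + o for p, o in zip(b, offset)),
            ))
    return struts

print(len(tile_lattice(UNIT_CELL, n=3)))  # 2 struts * 27 cells = 54
```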
Procedia PDF Downloads 148
4903 Effect of Nicotine on the Reinforcing Effects of Cocaine in a Nonhuman Primate Model of Drug Use
Authors: Mia I. Allen, Bernard N. Johnson, Gagan Deep, Yixin Su, Sangeeta Singth, Ashish Kumar, Michael A. Nader
Abstract:
With no FDA-approved treatments for cocaine use disorder (CUD), research has focused on the behavioral and neuropharmacological effects of cocaine in animal models, with the goal of identifying novel interventions. Although the majority of people with CUD also use tobacco/nicotine, most preclinical cocaine research does not include the co-use of nicotine. The present study examined nicotine and cocaine co-use under several conditions of intravenous drug self-administration in monkeys. In Experiment 1, male rhesus monkeys (N=3) self-administered cocaine (0.001-0.1 mg/kg/injection) alone and cocaine+nicotine (0.01-0.03 mg/kg/injection) under a progressive-ratio schedule of reinforcement. When nicotine was added to cocaine, there was a significant leftward shift of the dose-response curve and a significant increase in peak break point. In Experiment 2, socially housed female and male cynomolgus monkeys (N=14) self-administered cocaine under a concurrent drug-vs-food choice schedule. Adding nicotine significantly decreased cocaine choice ED50 values (i.e., shifted the cocaine dose-response curve to the left) in females but not in males, with no evidence of social rank differences. In delay discounting studies, the co-use of nicotine and cocaine required significantly larger delays to the preferred drug reinforcer to reallocate choice compared with cocaine alone. Overall, these results suggest that the interaction of nicotine and cocaine co-use is not simply a function of potency but rather a fundamentally distinct condition that should be utilized to better understand the neuropharmacology of CUD and to evaluate potential treatments.
Keywords: polydrug use, animal models, nonhuman primates, behavioral pharmacology, drug self-administration
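For readers unfamiliar with the ED50 metric used in Experiment 2, the hedged sketch below fits a logistic curve to percent cocaine choice versus log-dose and solves for the 50% point; the dose and choice values are invented for illustration, not the study's data.

```python
# Hedged sketch of an ED50 calculation for a drug-vs-food choice curve.
# All data values below are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

doses = np.array([0.003, 0.01, 0.03, 0.1])      # mg/kg/injection (assumed)
pct_choice = np.array([5.0, 20.0, 70.0, 95.0])  # % cocaine choice (assumed)

def logistic(log_dose, ed50_log, slope):
    # Sigmoid from 0 to 100% choice, centered at the log10 of the ED50
    return 100.0 / (1.0 + np.exp(-slope * (log_dose - ed50_log)))

params, _ = curve_fit(logistic, np.log10(doses), pct_choice, p0=[-1.5, 2.0])
print(f"ED50 = {10 ** params[0]:.4f} mg/kg/injection")
```

A leftward shift of this fitted curve under nicotine co-administration would appear as a smaller ED50, which is how the choice data above are summarized.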
Procedia PDF Downloads 87
4902 The Association between C-Reactive Protein and Hypertension among Different US Participant Ethnicities: Findings from the National Health and Nutrition Examination Survey 1999-2010
Authors: Ghada Abo-Zaid
Abstract:
The main objective of this study was to examine the association between elevated CRP levels and the incidence of hypertension, before and after adjusting for age, BMI, gender, SES, smoking, diabetes, LDL cholesterol, and HDL cholesterol, and to determine whether the association differed by race. Method: Cross-sectional data were analysed for participants aged 17 to 74 years included in the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010. CRP level was classified into three categories (> 3 mg/L, between 1 mg/L and 3 mg/L, and < 1 mg/L). Blood pressure was categorized using the JNC 7 algorithm: hypertension was defined as systolic blood pressure (SBP) of 140 mmHg or more, diastolic blood pressure (DBP) of 90 mmHg or greater, or a self-reported prior diagnosis by a physician; pre-hypertension was defined as 139 > SBP > 120 or 89 > DBP > 80. A multinomial regression model was used to measure the association between CRP level and hypertension. Results: In univariable models, CRP concentrations > 3 mg/L were associated with a 73% greater risk of incident hypertension compared with CRP concentrations < 1 mg/L (hypertension: odds ratio [OR] = 1.73; 95% confidence interval [CI], 1.50-1.99). Ethnic comparisons showed that Mexican Americans had the highest risk of incident hypertension (OR = 2.39; 95% CI, 2.21-2.58). This risk became statistically insignificant, however, either after controlling for the other variables (hypertension: OR = 0.75; 95% CI, 0.52-1.08) or when categorized by race [Mexican American: OR = 1.58; 95% CI, 0.58-4.26; other Hispanic: OR = 0.87; 95% CI, 0.19-4.42; non-Hispanic White: OR = 0.90; 95% CI, 0.50-1.59; non-Hispanic Black: OR = 0.44; 95% CI, 0.22-0.87]. The same pattern was found for pre-hypertension, with non-Hispanic Blacks showing the highest significant risk (OR = 1.60; 95% CI, 1.26-2.03). When CRP concentrations were between 1.0 and 3.0 mg/L, prehypertension was associated in unadjusted models with a higher likelihood of elevated CRP (OR = 1.37; 95% CI, 1.15-1.62); this relationship held in non-Hispanic Whites, non-Hispanic Blacks, and the other-race group (non-Hispanic White: OR = 1.24; 95% CI, 1.03-1.48; non-Hispanic Black: OR = 1.60; 95% CI, 1.27-2.03; other race: OR = 2.50; 95% CI, 1.32-4.74), while the association was insignificant among Mexican Americans and other Hispanics. In the adjusted model, the relationship between CRP and prehypertension was no longer present. Likewise, hypertension was not independently associated with elevated CRP, and the results were unchanged after grouping by race or adjusting for the confounding variables. The same results were obtained when SBP or DBP was analysed as a continuous measure. Conclusions: This study confirmed an association between hypertension, prehypertension, and elevated CRP levels; however, the association disappeared after adjusting for other variables. Ethnic group differences were statistically significant in the univariable models but disappeared after controlling for other variables.
Keywords: CRP, hypertension, ethnicity, NHANES, blood pressure
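An illustrative sketch of the modelling step described in the abstract, using a multinomial logit with CRP category as the exposure; the file name and column names are hypothetical stand-ins, since the real NHANES variable codes and survey-weighting details differ.

```python
# Hedged sketch: unadjusted vs. adjusted multinomial models for blood pressure
# category by CRP category. Column names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nhanes_1999_2010.csv")  # assumed prepared analysis file
# bp_cat: 0 = normotensive, 1 = prehypertension, 2 = hypertension
# crp_cat: "<1" (reference), "1-3", ">3" mg/L
df["crp_cat"] = pd.Categorical(df["crp_cat"], categories=["<1", "1-3", ">3"])

unadjusted = smf.mnlogit("bp_cat ~ C(crp_cat)", data=df).fit()
adjusted = smf.mnlogit(
    "bp_cat ~ C(crp_cat) + age + bmi + C(gender) + C(race) + C(smoking)"
    " + C(diabetes) + ldl + hdl",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios vs. the normotensive reference
print(np.exp(unadjusted.params))
print(np.exp(adjusted.params))
```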
Procedia PDF Downloads 414