Search results for: classifiers accuracy
443 Climate Change's Impact on Artificial Wetlands
Authors: Carla Idely Palencia-Aguilar
Abstract:
Artificial wetlands play an important role in the Guasca Municipality of Colombia, not only because they are used by the agroindustry, but also because more than 45 bird species were found there, some of them endemic or migratory. Remote sensing with Aster and Modis images was used to determine changes in the water-covered area of the artificial wetlands over different time periods. Evapotranspiration was also determined by three methods: the Surface Energy Balance System (SEBS, Su) algorithm, the Surface Energy Balance Algorithm for Land (SEBAL, Bastiaanssen), and FAO potential evapotranspiration. Empirical equations were also developed to relate the Normalized Difference Vegetation Index (NDVI) to net radiation, ambient temperature, and rain, with an R² of 0.83. Daily groundwater level fluctuations were studied as well: data from a piezometer placed next to the wetland were fitted to rain changes (recorded at two weather stations located near the wetlands) by means of multiple regression and time series analysis, and the R² between calculated and measured values was higher than 0.98. Data from nearby weather stations were used for ordinary kriging, together with the Digital Elevation Model (DEM) developed using PCI software. Standard models (exponential, spherical, circular, Gaussian, linear) for describing spatial variation were tested. Ordinary cokriging between the height and rain variables was also tested, to determine whether it would increase the accuracy of the interpolation. The results showed no significant differences: ordinary kriging of the rain samples with a spherical function gave a mean of 58.06 and a standard deviation of 18.06, while cokriging (a spherical function for rain, a power function for height, and a spherical function for the cross-variable rain-height) gave a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the lack of awareness among neighbours and deficiencies in government oversight. Water quality was monitored over the years, and several parameters were measured to determine the chemical characteristics of the water. In addition, 600 pesticides were screened by gas and liquid chromatography. The results showed that coliforms, nitrogen, phosphorus, and prochloraz were the most significant contaminants.
Keywords: DEM, evapotranspiration, geostatistics, NDVI
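As a concrete illustration of the interpolation step (the gauge coordinates, rain values, and variogram parameters below are hypothetical, not the paper's data), ordinary kriging with a spherical variogram reduces to solving one small linear system per prediction point:

```python
import numpy as np

def spherical_variogram(h, nugget=0.0, sill=1.0, rng=1.0):
    """Spherical variogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def ordinary_kriging(xy, z, xy0, nugget, sill, rng):
    """Ordinary-kriging estimate of z at xy0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d, nugget, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(xy - xy0, axis=1), nugget, sill, rng)
    w = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
    return float(w[:n] @ z)

# Hypothetical rain gauges (x, y in km; rain in mm) around the wetland.
gauges = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 2.0]])
rain = np.array([55.0, 60.0, 58.0, 62.0])
print(ordinary_kriging(gauges, rain, np.array([2.0, 2.0]), 0.0, 330.0, 5.0))
```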
Procedia PDF Downloads 121
442 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastases growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III that reflects the relations between primary tumor and secondary distant metastases; 3) analyzing the scope of application of the CoM-III; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of deterministic nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows the calculation of different growth periods of primary tumor and secondary distant metastases in patients with lymph node metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for secondary distant metastases; 3) the 'visible period' for secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others rely on additional statistical data. Thus, the CoM-III model and predictive software: a) detect the different growth periods of primary tumor and secondary distant metastases in patients with lymph node metastases; b) forecast the period in which distant metastases appear in patients with lymph node metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. The following are calculated by the CoM-III: the number of doublings for the 'non-visible' and 'visible' growth periods of secondary distant metastases, and the tumor volume doubling time (in days) for those periods. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) the CoM-III correctly describes primary tumor and secondary distant metastases growth of the IA, IIA, IIB, and IIIB (T1-4N1-3M0) stages in patients with lymph node metastases (N1-3); b) it facilitates understanding of the period and inception of the appearance of secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival
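The doubling-time arithmetic behind such an exponential growth model can be sketched as follows; the initial cell volume, detection threshold, and doubling time used here are illustrative assumptions, not the CoM-III parameters:

```python
import numpy as np

def doublings(v0_mm3, v_mm3):
    """Number of volume doublings needed to grow from v0 to v."""
    return np.log2(v_mm3 / v0_mm3)

def time_to_volume(v0_mm3, v_mm3, dt_days):
    """Time (days) under exponential growth V(t) = V0 * 2**(t/DT)."""
    return doublings(v0_mm3, v_mm3) * dt_days

# One malignant cell (~1e-6 mm^3) growing to a 10 mm sphere (~524 mm^3);
# a doubling time of 120 days is a hypothetical value.
v0, v1, dt = 1e-6, 4 / 3 * np.pi * 5 ** 3, 120.0
print(f"{doublings(v0, v1):.1f} doublings, "
      f"{time_to_volume(v0, v1, dt) / 365:.1f} years in the 'non-visible' period")
```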
Procedia PDF Downloads 302
441 Creation of a Clinical Tool for Diagnosis and Treatment of Skin Disease in HIV Positive Patients in Malawi
Authors: Alice Huffman, Joseph Hartland, Sam Gibbs
Abstract:
Dermatology is often a neglected specialty in low-resource settings, despite the high morbidity associated with skin disease. This becomes even more significant in HIV infection, as dermatological conditions are more common and aggressive in HIV positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and on data collected from clinics. The aim was to improve diagnostic accuracy and provide guidance for the treatment of skin disease in HIV positive patients. A literature search within Embase, Medline, and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes when they develop in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was compiled and a booklet created using a simple layout of a picture, a diagnostic description of the disease, and treatment options. Clinical photographs were collected from local clinics (with the full consent of the patients) or from the book 'Common Skin Diseases in Africa' (permission granted if fully acknowledged and used in a not-for-profit capacity). The tool was evaluated by the local staff, alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.
Keywords: dermatology, HIV, Malawi, skin disease
Procedia PDF Downloads 206
440 Diagnostic Value of CT Scan in Acute Appendicitis
Authors: Maria Medeiros, Suren Surenthiran, Abitha Muralithar, Soushma Seeburuth, Mohammed Mohammed
Abstract:
Introduction: Appendicitis is the most common surgical emergency globally and can have devastating consequences. Diagnostic imaging has become increasingly common in aiding the diagnosis of acute appendicitis; computerized tomography (CT) and ultrasound (US) are the most commonly used imaging modalities. Pre-operative imaging has contributed to a reduction of negative appendicectomy rates from 10-29% to 5%. The literature reports that CT has a diagnostic sensitivity of 94% for acute appendicitis. This clinical audit was conducted to establish whether the diagnostic yield of CT for acute appendicitis matches the literature. CT has a high sensitivity and specificity for diagnosing acute appendicitis, and its use can result in a lower negative appendicectomy rate. The aim of this study is to compare pre-operative CT findings with post-operative histopathology results and establish the accuracy of CT in aiding the diagnosis of acute appendicitis. Methods: This was a retrospective study of adult presentations to the general surgery department of a district general hospital in central London with an impression of acute appendicitis. We analyzed all patients from July 2022 to December 2022 who underwent a CT scan preceding appendicectomy. Pre-operative CT findings and post-operative histopathology findings were compared to establish the efficacy of CT in diagnosing acute appendicitis. Our results were also cross-referenced with the pre-existing literature. Data were collected and anonymized using CERNER and analyzed in Microsoft Excel. Exclusion criteria: children, age <16. Results: 65 patients had CT scans whose reports stated acute appendicitis. Of those 65 patients, 62 underwent diagnostic laparoscopies. 100% of patients who underwent an appendicectomy with a pre-operative CT scan showing acute appendicitis had acute appendicitis on histopathology analysis. 3 of the 65 patients whose CT scans showed appendicitis received conservative treatment. Conclusion: CT scans positive for acute appendicitis had a 100% positive predictive value in this cohort, consistent with published research (sensitivity of 94%). The use of CT in the diagnostic work-up for acute appendicitis can be extremely helpful in a) confirming the diagnosis and b) reducing the rate of negative appendicectomies, thereby reducing unnecessary operative risks for patients, reducing costs, and reducing pressure on emergency theatre lists.
Keywords: acute appendicitis, CT scan, general surgery, imaging
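As a worked illustration of the reported figures (using only the 62 operated, CT-positive patients; true negatives are unobservable in this audit design), the standard 2x2 test metrics can be computed as follows:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 test-accuracy measures."""
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    return sens, spec, ppv, npv

# Audit cohort: 62 operated patients, all CT-positive and all histology-positive,
# so FP = FN = TN = 0 within the operated group.
sens, spec, ppv, npv = diagnostic_metrics(tp=62, fp=0, fn=0, tn=0)
print(f"PPV = {ppv:.0%}")   # -> PPV = 100%
```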
Procedia PDF Downloads 94
439 Drug Therapy Problem and Its Contributing Factors among Pediatric Patients with Infectious Diseases Admitted to Jimma University Medical Center, South West Ethiopia: Prospective Observational Study
Authors: Desalegn Feyissa Desu
Abstract:
Drug therapy problems are a significant challenge to providing high-quality health care. They are associated with morbidity, mortality, increased hospital stay, and reduced quality of life. Moreover, pediatric patients are particularly susceptible to drug therapy problems. Thus, this study aimed to assess drug therapy problems and their contributing factors among pediatric patients diagnosed with infectious disease admitted to the pediatric ward of Jimma University Medical Center from April 1 to June 30, 2018. A prospective observational study was conducted among pediatric patients with infectious disease admitted during that period. Drug therapy problems were identified using Cipolle and Strand's classification of drug-related problems. Written informed consent was obtained from patients after explaining the purpose of the study. Patient-specific data were collected using a structured questionnaire. Data were entered into EpiData version 4.0.2 and then exported to the statistical software package version 21.0 for analysis. To identify predictors of the occurrence of drug therapy problems, multiple stepwise backward logistic regression analysis was done. The 95% CI was used to show the accuracy of the data analysis, and statistical significance was considered at p-value < 0.05. A total of 304 pediatric patients were included in the study. Of these, 226 (74.3%) patients had at least one drug therapy problem during their hospital stay. A total of 356 drug therapy problems were identified among these 226 patients. Non-compliance (28.65%) and dose too low (27.53%) were the most common types of drug-related problems, while disease comorbidity [AOR=3.39, 95% CI=(1.89-6.08)], polypharmacy [AOR=3.16, 95% CI=(1.61-6.20)], and more than six days' stay in hospital [AOR=3.37, 95% CI=(1.71-6.64)] were independent predictors of the occurrence of drug therapy problems. Drug therapy problems were common in pediatric patients with infectious disease in the study area. Presence of comorbidity, polypharmacy, and prolonged hospital stay were the predictors of drug therapy problems. Therefore, to overcome the significant gaps in pediatric pharmaceutical care, clinical pharmacists, pediatricians, and other health care professionals have to work in collaboration.
Keywords: drug therapy problem, pediatric, infectious disease, Ethiopia
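A minimal sketch of how such adjusted odds ratios (AORs) and confidence intervals are obtained from a logistic regression; the data below are synthetic with hypothetical effect sizes, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level data with the three predictors from the abstract.
rng = np.random.default_rng(0)
n = 304
df = pd.DataFrame({
    "comorbidity":  rng.integers(0, 2, n),
    "polypharmacy": rng.integers(0, 2, n),
    "stay_gt6d":    rng.integers(0, 2, n),
})
logit = 0.5 + 1.2 * df.comorbidity + 1.15 * df.polypharmacy + 1.2 * df.stay_gt6d
df["dtp"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # drug therapy problem

X = sm.add_constant(df[["comorbidity", "polypharmacy", "stay_gt6d"]])
fit = sm.Logit(df["dtp"], X).fit(disp=0)
aor = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
aor.columns = ["AOR", "2.5%", "97.5%"]   # exponentiated coefficients and CI
print(aor)
```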
Procedia PDF Downloads 153
438 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI
Authors: Brennan Lodge
Abstract:
Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, they cannot straightforwardly provide insight into their predictions, and they may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure, which pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights drawn from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies
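A minimal sketch of the retrieve-then-generate loop described above; the embedding function, corpus snippets, and the final generation call are placeholders, and a real system would use a trained sentence encoder and a seq2seq LLM:

```python
import numpy as np

def embed(texts):
    """Stand-in embedding; replace with a real sentence encoder in practice."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % 2**32)
    return rng.normal(size=(len(texts), 384))

corpus = [
    "GDPR Art. 33: notify the supervisory authority within 72 hours of a breach.",
    "SOX 404: management must assess internal control over financial reporting.",
    "NIST CSF: identify, protect, detect, respond, recover.",
]
doc_vecs = embed(corpus)
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)  # dense vector index

def retrieve(query, k=2):
    q = embed([query])[0]
    q /= np.linalg.norm(q)
    scores = doc_vecs @ q                      # cosine similarity
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

query = "What is our breach-notification obligation?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# response = seq2seq_model.generate(prompt)    # hypothetical LLM call
print(prompt)
```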
Procedia PDF Downloads 97
437 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important for quantifying valuable carbon stocks and the changes to these stocks in cases of land-use change. However, Thailand presently lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation, and for smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: an approximately 50-year-old secondary forest, a 4-year-old fallow, and a 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and from samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Density Database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. This study provides insights for reforestation management and can be used to prepare baseline data for Thailand's carbon stocks for REDD+ and other carbon trading schemes, which may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
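A sketch of how such an allometric equation can be fitted by log-log least squares; the harvest data below are hypothetical stand-ins, and the real coefficients come from the 136 harvested trees:

```python
import numpy as np

# Hypothetical destructive-harvest data: DBH (cm), height (m),
# wood density (g/cm^3), and oven-dry aboveground biomass (kg).
dbh = np.array([2.1, 4.5, 8.0, 12.3, 19.7, 31.0])
h   = np.array([3.0, 6.2, 9.5, 13.0, 18.4, 24.0])
wd  = np.array([0.45, 0.52, 0.60, 0.55, 0.63, 0.58])
agb = np.array([0.9, 5.1, 24.0, 78.0, 310.0, 980.0])

# Fit ln(AGB) = a + b*ln(DBH) + c*ln(H) + d*ln(WD) by least squares.
X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(h), np.log(wd)])
coef, *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
a, b, c, d = coef
print(f"AGB = exp({a:.2f}) * DBH^{b:.2f} * H^{c:.2f} * WD^{d:.2f}")
```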
Procedia PDF Downloads 285
436 Extracting Plowing Forces for Aluminum 6061-T6 Using a Small Number of Drilling Experiments
Authors: Ilige S. Hage, Charbel Y. Seif
Abstract:
Forces measured during cutting operations are generated by the cutting process and include parasitic forces known as edge forces. A fraction of these measured forces arises from the tertiary cutting zone, such as flank or edge forces. Most machining models are designed for sharp tools, where edge forces represent the portion of the measured forces associated with deviations of the tool from an ideal sharp geometry. Flank forces are challenging to isolate. The most common method involves plotting the force at a constant cutting speed against uncut chip thickness and then extrapolating to zero feed; the resulting positive intercept on the vertical axis is identified as the edge or plowing force. The aim of this research is to identify the effect of tool rake angle and cutting speed on flank forces and to develop a force model, as a function of tool rake angle and cutting speed, for predicting plowing forces. Edge forces were identified from a limited number of drilling experiments using a 10 mm twist drill, where lip edge cutting forces were collected from 2.5 mm pre-cored samples. Cutting lip forces were measured with feed rates varying from 0.04 to 0.64 mm/rev and spindle speeds ranging from 796 to 9868 rpm, at a sampling rate of 200 Hz. By using real-time force measurements as the drill enters the workpiece, this study provides an economical method for analyzing the effect of tool geometry and cutting conditions on the generated cutting forces, reducing the number of required experimental setups. As a result, an empirical model predicting parasitic edge forces was developed as a function of the cutting velocity, tool rake angle, and clearance angle along the lip of the tool, demonstrating strong agreement with edge forces reported in the literature for Aluminum 6061-T6. The model achieved an R² value of 0.92 and a mean square error of 4%, validating the accuracy of the proposed methodology. The presented methodology leverages variations in machining parameters. This approach contrasts with traditional machining experiments, where the turning process typically serves as the basis for force measurements and each experimental setup is characterized by a single cutting velocity, tool rake angle, and clearance angle.
Keywords: drilling, plowing, edge forces, cutting force, torque
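The zero-feed extrapolation described above amounts to a linear fit whose intercept is the plowing force; the feed/force pairs below are hypothetical:

```python
import numpy as np

# Hypothetical cutting-force data at one cutting speed and rake angle:
# feed (uncut chip thickness, mm/rev) vs. measured thrust force (N).
feed  = np.array([0.04, 0.08, 0.16, 0.32, 0.64])
force = np.array([38.0, 52.0, 80.0, 138.0, 250.0])

# Linear fit F = Kc*feed + Fe; the intercept Fe at zero feed is the
# plowing (edge) force, following the extrapolation method in the abstract.
kc, fe = np.polyfit(feed, force, 1)
print(f"cutting-force slope = {kc:.1f} N/(mm/rev), edge force = {fe:.1f} N")
```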
Procedia PDF Downloads 3
435 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were inspected, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using sparse logistic regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied at two different levels, namely the single-participant level and the group level. At the single-participant level, the EEG datasets used in the first and second stages originated from the same participant. At the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination, without repetition, of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [-45°, 45°] of the true direction was 70.03 ± 8.14% at the single-participant level and 62.63 ± 6.07% at the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding
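A minimal sketch of the two-stage pipeline on synthetic data; scikit-learn's FastICA and an L1-penalized logistic regression stand in for the ICA decomposition and the Bayesian sparse logistic regression (SLR) used in the study:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

# Hypothetical EEG: 64 channels x 8000 samples covering 200 trials.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(64, 8000))

# Stage 1: unmix channels into independent components (ICs).
ica = FastICA(n_components=20, random_state=0)
ics = ica.fit_transform(eeg.T).T           # shape (20, 8000)
# ... here the brain-related ICs would be selected by inspection ...

# Stage 2: sparse (L1) logistic regression on per-trial IC features.
X = ics.reshape(20, 200, 40).mean(axis=2).T    # 200 trials x 20 features
y = rng.integers(0, 8, 200)                    # 8 movement directions
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("training accuracy:", clf.score(X, y))
```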
Procedia PDF Downloads 138
434 Flow Boiling Heat Transfer at Low Mass and Heat Fluxes: Heat Transfer Coefficient, Flow Pattern Analysis and Correlation Assessment
Authors: Ernest Gyan Bediako, Petra Dancova, Tomas Vit
Abstract:
Flow boiling heat transfer remains an important area of research due to its relevance in thermal management systems and other applications. Despite the enormous amount of work done in the field over the years to understand how flow parameters such as mass flux, heat flux, saturation conditions, and tube geometry influence the characteristics of flow boiling heat transfer, there are still many contradictions and a lack of agreement on the actual mechanisms controlling heat transfer and on how flow parameters impact it. This work therefore experimentally investigates the heat transfer characteristics and flow patterns at low mass fluxes, low heat fluxes, and low saturation pressures, conditions which have received less attention in the literature but are prevalent in refrigeration, air-conditioning, and heat pump applications. In this study, a flow boiling experiment was conducted with the working fluid R134a in a 5 mm internal diameter stainless steel horizontal smooth tube, with mass flux ranging from 80 to 100 kg/m²s, heat flux ranging from 3.55 kW/m² to 25.23 kW/m², and a saturation pressure of 460 kPa. Vapor quality ranged from 0 to 1. The well-known flow pattern map created by Wojtan et al. was used to predict the flow patterns observed during the study. The experimental results were compared with well-known flow boiling heat transfer correlations from the literature. The findings show that the heat transfer coefficient was influenced by both mass flux and heat flux. However, with increasing heat flux, nucleate boiling was observed to be the dominant mechanism controlling the heat transfer, especially in the low vapor quality region. With increasing mass flux, convective boiling was the dominant mechanism, especially in the high vapor quality region. The study also observed an unusually high heat transfer coefficient at low vapor qualities, which could be due to periodic wetting of the tube walls caused by the slug flow and stratified wavy flow patterns. The flow patterns predicted by the Wojtan et al. flow pattern map were a mixture of slug and stratified wavy, purely stratified wavy, and dryout. A statistical assessment of the experimental data against various well-known correlations from the literature showed that none of the correlations could predict the experimental data with sufficient accuracy.
Keywords: flow boiling, heat transfer coefficient, mass flux, heat flux
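A sketch of the kind of statistical assessment used to compare measured heat transfer coefficients against correlation predictions; the data points below are hypothetical, and the mean absolute relative deviation (MARD) with a +/-30% band is one common convention:

```python
import numpy as np

def assess(h_meas, h_pred, band=0.3):
    """Mean absolute relative deviation and fraction of points within
    a +/-30% band, both computed against the measured HTCs."""
    rel = (h_pred - h_meas) / h_meas
    mard = np.mean(np.abs(rel))
    within = np.mean(np.abs(rel) <= band)
    return mard, within

# Hypothetical measured vs. correlation-predicted HTCs (W/m^2 K).
h_meas = np.array([2100.0, 2600.0, 3400.0, 4100.0, 5200.0])
h_pred = np.array([1600.0, 2900.0, 2500.0, 5400.0, 4300.0])
mard, within = assess(h_meas, h_pred)
print(f"MARD = {mard:.1%}, {within:.0%} of points within +/-30%")
```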
Procedia PDF Downloads 117
433 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
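A minimal sketch of the regression stage on a synthetic stand-in for the model-generated database; PCA feeds a random forest here (t-SNE, being non-parametric, is typically reserved for visualization rather than prediction):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical model-generated database: 500 simulated cells, 40 non-destructive
# aging indicators each (e.g., SEI-thickness proxies), with known remaining life.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = 800 * np.exp(-0.5 * X[:, 0] ** 2) + rng.normal(scale=20, size=500)  # cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pca = PCA(n_components=10).fit(X_tr)           # compress the aging signatures
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(pca.transform(X_tr), y_tr)

err = np.mean(np.abs(rf.predict(pca.transform(X_te)) - y_te) / np.abs(y_te))
print(f"mean relative error: {err:.1%}")
```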
Procedia PDF Downloads 70
432 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural, normally dry state but rapidly deform under saturation (wetting), generating large and unexpected settlements that often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed from an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration, spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. The resulting benefits include the preservation of the undisturbed nature of the soil as well as a reduction in investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of the model's predictions is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and experimental results is obtained.
Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
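A hedged sketch of the prediction chain: the Topp et al. (1980) relation stands in for the paper's frequency- and temperature-dependent mixing equation, and the subsidence law below is a placeholder functional form, not the actual model:

```python
def topp_moisture(eps_r):
    """Topp et al. (1980) relation: volumetric water content from apparent
    relative permittivity; used here as an illustrative stand-in for the
    paper's electromagnetic mixing equation."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

def relative_subsidence(eps_r, a=0.12, b=0.9):
    """Hypothetical subsidence law R = a * theta**b; the coefficients and
    the functional form are placeholders, not the model from the paper."""
    theta = topp_moisture(eps_r)
    return a * theta**b

for eps in (4.0, 10.0, 20.0):
    print(f"eps_r = {eps:4.1f} -> theta = {topp_moisture(eps):.3f}, "
          f"R = {relative_subsidence(eps):.4f}")
```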
Procedia PDF Downloads 363
431 AIR SAFE: an Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms
Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita
Abstract:
Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate airflow is therefore essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people's well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) automated and cost-effective monitoring networks; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures needed to ensure the occupants' wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
Keywords: air quality, internet of things, artificial intelligence, smart home
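A minimal sketch of the predict-then-act loop; the autoregressive forecaster, CO₂ threshold, and readings below are illustrative stand-ins for the AI algorithms described above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical indoor CO2 readings (ppm) sampled every 10 minutes.
history = np.array([420, 450, 500, 560, 630, 710, 800], dtype=float)

# One-step-ahead autoregressive forecast, a minimal stand-in for the
# prediction models described in the abstract.
X = history[:-1].reshape(-1, 1)
y = history[1:]
model = LinearRegression().fit(X, y)
next_co2 = model.predict([[history[-1]]])[0]

CO2_LIMIT = 1000.0  # ppm threshold commonly used for ventilation control
action = "open window / increase ventilation" if next_co2 > CO2_LIMIT else "hold"
print(f"forecast: {next_co2:.0f} ppm -> action: {action}")
```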
Procedia PDF Downloads 94
430 Conversational Assistive Technology of Visually Impaired Person for Social Interaction
Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer
Abstract:
Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with face, object, and activity recognition. The model is evaluated on the accuracy of its activity recognition. The device captures the blind user's front view, detects objects, recognizes activities, and answers the user's queries. It is implemented using a front-facing camera. A local dataset was collected that includes different first-person human activities. The results obtained show the identification of the activities the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, and waving.
Keywords: dataset, visually impaired person, natural language process, human activity recognition
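A sketch of frame-level fine-tuning of VGG-16 for such activities, assuming a Keras/TensorFlow setup; the class list and the training data are placeholders:

```python
import tensorflow as tf

NUM_CLASSES = 5  # hugging, shaking hands, talking, walking, waving

# Load ImageNet-pretrained VGG-16 without its classification head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_frames, train_labels, epochs=10)  # hypothetical dataset
```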
Procedia PDF Downloads 60
429 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image so that its scale is uniform. The ortho-image formed by this process is corrected for lens distortion, topographic relief, and camera tilt, and can be used to measure true distances because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map; however, such a comparison is only possible once the image is ortho-rectified in the same co-ordinate system as the existing map. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information, before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric mapping, which can be compared with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
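Steps (1)-(4) can be sketched as follows; the homography here is a simplified stand-in for the full interior/exterior orientation model, and the file name and corner coordinates are hypothetical:

```python
import cv2
import numpy as np

# Step 1: decompile the video stream into individual frames.
cap = cv2.VideoCapture("uav_stream.mp4")        # hypothetical file name
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Steps 2-4 reduce, per frame, to a projective transform built from the
# interior and exterior orientation; a plane-to-plane homography stands
# in for that transform here.
src = np.float32([[0, 0], [1279, 0], [1279, 719], [0, 719]])      # image corners
dst = np.float32([[50, 30], [1210, 10], [1260, 700], [20, 690]])  # map coords
H = cv2.getPerspectiveTransform(src, dst)
ortho = cv2.warpPerspective(frames[0], H, (1280, 720)) if frames else None
```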
Procedia PDF Downloads 478
428 Evaluation of Commercial Back-analysis Package in Condition Assessment of Railways
Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman
Abstract:
Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have imposed costly maintenance actions on the railway sector. The need to develop a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among the many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through the back-analysis technique. Although several commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test, was developed and validated against available field data. Then, a virtual experimental database (including 218 sets of FWD testing data) was generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated by considering various moduli for each layer of the track substructure over a predefined range. The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; conducted near Leominster station, at the same section where the FWD tests were performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To adjust BAKFAA to the CPT data, this study introduces a correlation model to make BAKFAA applicable in railway applications.
Keywords: back-analysis, BAKFAA, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)
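Back-analysis of this kind is an inverse problem: layer moduli are adjusted iteratively until a forward model reproduces the measured deflection bowl. The sketch below uses a made-up analytical forward function in place of BAKFAA's layered-elastic engine, so the units and coefficients are purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def forward_deflections(moduli):
    """Hypothetical forward model mapping layer moduli (MPa) to surface
    deflections (mm) at the FWD geophone offsets; a real back-analysis
    would call a layered-elastic or FE solver here."""
    ballast, subballast, subgrade = moduli
    offsets = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])
    return (5.0 / ballast * np.exp(-offsets / 0.4)
            + 8.0 / subballast * np.exp(-offsets / 0.8)
            + 15.0 / subgrade * np.exp(-offsets / 1.6))

measured = forward_deflections([180.0, 120.0, 60.0])   # synthetic "field" bowl

def rms_error(moduli):
    return np.sqrt(np.mean((forward_deflections(moduli) - measured) ** 2))

res = minimize(rms_error, x0=[100.0, 100.0, 100.0],
               bounds=[(10, 500)] * 3, method="L-BFGS-B")
print("back-calculated moduli (MPa):", np.round(res.x, 1))
```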
Procedia PDF Downloads 130
427 Association Between Type of Face Mask and Visual Analog Scale Scores During Pain Assessment
Authors: Merav Ben Natan, Yaniv Steinfeld, Sara Badash, Galina Shmilov, Milena Abramov, Danny Epstein, Yaniv Yonai, Eyal Berbalek, Yaron Berkovich
Abstract:
Introduction: Postoperative pain management is crucial for effective rehabilitation, with the Visual Analog Scale (VAS) being a common tool for assessing pain intensity due to its sensitivity and accuracy. However, challenges such as misunderstanding of instructions and discrepancies in pain reporting can affect its reliability. Additionally, the mandatory use of face masks during the COVID-19 pandemic may impair nonverbal and verbal communication, potentially impacting pain assessment and overall care quality. Aims: This study examines the association between the type of mask worn by healthcare professionals and the assessment of pain intensity in patients after orthopedic surgery using the VAS. Design: A nonrandomized controlled trial was conducted among 176 patients hospitalized in the orthopedic department of a hospital located in northern-central Israel from January to March 2021. Methods: In the intervention group (n = 83), pain assessment using the VAS was performed by a healthcare professional wearing a transparent face mask, while in the control group (n = 93), pain assessment was performed by a healthcare professional wearing a standard non-transparent face mask. The initial assessment was performed by a nurse, and 15 minutes later an additional assessment was performed by a physician. Results: Healthcare professionals wearing a standard non-transparent mask obtained higher VAS scores than healthcare professionals wearing a transparent mask. In addition, nurses obtained lower VAS scores than physicians. A discrepancy in VAS scores between nurses and physicians was found in 50% of cases. This discrepancy was more prevalent among female patients, patients after knee replacement or spinal surgery, and when healthcare professionals were wearing a standard non-transparent mask. Conclusions: This study supports the use of transparent face masks by healthcare professionals in orthopedic departments, particularly by nurses. In addition, this study supports concerns about the reliability of the VAS.
Keywords: postoperative pain management, visual analog scale, face masks, orthopedic surgery
Procedia PDF Downloads 28
426 Proteomic Analysis of Cytoplasmic Antigen from Brucella canis to Characterize Immunogenic Proteins Responded with Naturally Infected Dogs
Authors: J. J. Lee, S. R. Sung, E. J. Yum, S. C. Kim, B. H. Hyun, M. Her, H. S. Lee
Abstract:
Canine brucellosis is a critical problem in dogs, leading to reproductive disease, and is mainly caused by Brucella canis. The symptoms are, nonetheless, often unclear, so the disease may go unnoticed in most cases. Serodiagnosis of canine brucellosis has not been standardized and presents substantial difficulties due to the broad cross-reactivity between the rough cell-wall antigens of B. canis and heterospecific antibodies present in normal, uninfected dogs. This study was therefore conducted to characterize the immunogenic proteins in the cytoplasmic antigen (CPAg) of B. canis, which define the antigenic sensitivity of the humoral antibody responses of B. canis-infected dogs. For the analysis of B. canis CPAg, we first extracted and purified the cytoplasmic proteins from cultured B. canis by hot-saline inactivation, ultrafiltration, sonication, and ultracentrifugation, step by step, according to the sonicated antigen extraction method. To characterize this antigen, we examined the type and size range of each protein on SDS-PAGE and verified which immunogenic proteins reacted with the antisera of B. canis-infected dogs. Selected immunodominant proteins were identified using MALDI-MS/MS. In the immunoproteomic assay, several polypeptides in CPAg on one- or two-dimensional electrophoresis (1-DE, 2-DE) reacted specifically with antisera from B. canis-infected dogs but not with sera from non-infected dogs. Polypeptides of approximately 150, 80, 60, 52, 33, 26, 17, 15, 13, and 11 kDa on 1-DE were dominantly recognized by antisera from B. canis-infected dogs. In the immunoblot profiles on 2-DE, ten immunodominant proteins in CPAg were detected with antisera of infected dogs between pI 3.5-6.5 at approximately 35 to 10 kDa, without any nonspecific reaction with sera from non-infected dogs. The ten immunodominant proteins identified by MALDI-MS/MS were superoxide dismutase, bacterioferritin, an amino acid ABC transporter substrate-binding protein, an extracellular solute-binding protein (family 3), transaldolase, a 26 kDa periplasmic immunogenic protein, a rhizopine-binding protein, enoyl-CoA hydratase, arginase, and type 1 glyceraldehyde-3-phosphate dehydrogenase. Most of these proteins have cytoplasmic or periplasmic localization and metabolic or transporter functions. Consequently, this study discovered and identified the prominent immunogenic proteins in B. canis CPAg, highlighting that these antigenic proteins may enable a specific serodiagnosis for canine brucellosis. Furthermore, we will evaluate these immunodominant proteins for application in advanced diagnostic methods with high specificity and accuracy.
Keywords: Brucella canis, canine brucellosis, cytoplasmic antigen, immunogenic proteins
Procedia PDF Downloads 147
425 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition
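A minimal sketch of the local-DP perturbation idea behind SAFER, using a fixed Laplace mechanism; SAFER's adaptive, risk-driven choice of the privacy budget is not reproduced here:

```python
import numpy as np

def perturb_weights(weights, sensitivity, epsilon):
    """Add Laplace noise scaled to sensitivity/epsilon to model weights
    before sharing them with the FL server (local differential privacy)."""
    rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return weights + rng.laplace(loc=0.0, scale=scale, size=weights.shape)

layer = np.random.default_rng(0).normal(size=(128,))  # one layer's weights
for eps in (0.1, 1.0, 10.0):   # smaller epsilon -> stronger privacy, more noise
    noisy = perturb_weights(layer, sensitivity=1.0, epsilon=eps)
    print(f"epsilon={eps:>4}: mean |perturbation| = "
          f"{np.mean(np.abs(noisy - layer)):.2f}")
```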
Procedia PDF Downloads 13
424 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition
Authors: Habtamu Garoma Debela, Gemechis File Duressa
Abstract:
In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to the appearance of the layer phenomena, providing ε-uniform numerical methods is a challenging task. The term 'ε-uniform' identifies those numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture some notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via the nonstandard fitted operator numerical method and numerical integration methods to solve the problem. The non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method. Maximum absolute errors and rates of convergence for different values of the perturbation parameter and different mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted which support all of our theoretical findings. A concise conclusion is provided at the end of this work.
Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent
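A sketch of the Richardson extrapolation step on a model problem; a plain first-order upwind scheme on the linear problem -ε u'' + u' = 1 with u(0) = u(1) = 0 stands in for the nonstandard fitted operator scheme, and the two-point boundary conditions replace the non-local one:

```python
import numpy as np

def upwind_solve(eps, n):
    """First-order upwind scheme for -eps*u'' + u' = 1, u(0)=u(1)=0,
    used here as a simple stand-in for the fitted scheme."""
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = 2 * eps / h**2 + 1 / h
        if i > 0:
            A[i, i - 1] = -eps / h**2 - 1 / h
        if i < n - 2:
            A[i, i + 1] = -eps / h**2
    return np.linalg.solve(A, np.ones(n - 1))

eps, n = 1e-2, 64
u_h = upwind_solve(eps, n)
u_h2 = upwind_solve(eps, 2 * n)
# Richardson extrapolation for a first-order scheme: u_R = 2*u_{h/2} - u_h
u_rich = 2 * u_h2[1::2] - u_h

x = np.linspace(0, 1, n + 1)[1:-1]
u_exact = x - (np.exp((x - 1) / eps) - np.exp(-1 / eps)) / (1 - np.exp(-1 / eps))
print("upwind max error:    ", np.max(np.abs(u_h - u_exact)))
print("Richardson max error:", np.max(np.abs(u_rich - u_exact)))
```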
Procedia PDF Downloads 143
423 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for a burn, necrosis, or ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient's skin-dose mapping is essential. For most machines, the Dose Area Product (DAP) and the fluoroscopy time are the only information available to the operator, and these two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. In case of a critical dose being exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of our mathematical model's skin-dose mapping with clinical studies were performed. First, clinical tests were performed on patient phantoms: Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. Second, clinical trials were conducted on real patients during real Chronic Total Occlusion (CTO) procedures, for a total of 80 cases, with Gafchromic films placed at the backs of the patients. We compared the dose values as well as the distribution and shape of the irradiation fields between our mathematical model's skin-dose mapping and the Gafchromic films. Results: The comparison of the dose values shows a difference of less than 15%. Moreover, our model shows very good geometric accuracy: all fields have the same shape, size, and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool for warning physicians when a high radiation dose is reached, so that deterministic effects can be avoided.
Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping
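A minimal sketch of the accumulation-and-alert logic; the grid size, field geometry, doses, and the 2 Gy erythema threshold are illustrative assumptions, not the model's actual parameters:

```python
import numpy as np

# Hypothetical skin-dose mapping: accumulate each irradiation field onto a
# 2D grid of the patient's back (1 mm pixels) and warn on threshold crossing.
GRID = np.zeros((400, 400))          # 40 x 40 cm skin map, dose in mGy
ALERT_MGY = 2000.0                   # ~2 Gy, a common erythema threshold

def add_field(grid, center, size, dose_mgy):
    """Deposit a rectangular field of given size (pixels) and dose."""
    r, c = center
    hr, hc = size[0] // 2, size[1] // 2
    grid[r - hr:r + hr, c - hc:c + hc] += dose_mgy

for center, size, dose in [((200, 180), (120, 120), 900.0),
                           ((210, 220), (120, 120), 1400.0)]:
    add_field(GRID, center, size, dose)
    peak = GRID.max()
    if peak > ALERT_MGY:
        print(f"WARNING: peak skin dose {peak:.0f} mGy exceeds {ALERT_MGY:.0f}")
```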
Procedia PDF Downloads 141
422 Application of Artificial Intelligence in Market and Sales Network Management: Opportunities, Benefits, and Challenges
Authors: Mohamad Mahdi Namdari
Abstract:
In today's rapidly evolving and highly competitive business environment, companies and organizations require advanced and efficient tools to manage their markets and sales networks. Big data analysis, quick response in competitive markets, process and operations optimization, and forecasting customer behavior are among the concerns of executive managers. Artificial intelligence, as one of the emerging technologies, has provided extensive capabilities in this regard. The use of artificial intelligence in market and sales network management can lead to improved efficiency, increased decision-making accuracy, and enhanced customer satisfaction. Specifically, AI algorithms can analyze vast amounts of data, identify complex patterns, and offer strategic suggestions to improve sales performance. However, many companies are still far from effectively leveraging this technology, and those that do face challenges in fully exploiting AI's potential in market and sales network management. It appears that a lack of knowledge of this technology among the general public, and even in the managerial and academic communities, has caused management structures to lag behind the progress and development of artificial intelligence. Additionally, high costs, fear of change and employee resistance, a lack of quality data production processes, the need to update structures and processes, implementation issues, the need for specialized skills and technical equipment, and ethical and privacy concerns are among the factors preventing the widespread use of this technology in organizations. Clarifying and explaining this technology, especially to the academic, managerial, and elite communities, can pave the way for a transformative beginning. The aim of this research is to elucidate the capacities of artificial intelligence in market and sales network management, identify its opportunities and benefits, and examine the existing challenges and obstacles. This research aims to leverage AI capabilities to provide a framework for enhancing market and sales network performance for managers. The results of this research can help managers and decision-makers adopt more effective strategies for business growth and development by better understanding the capabilities and limitations of artificial intelligence.
Keywords: artificial intelligence, market management, sales network, big data analysis, decision-making, digital marketing
Procedia PDF Downloads 45
421 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method
Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer
Abstract:
This study puts forward a method to analyze the sloshing characteristics of the liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers' attention as seismic load dampers in constructions due to their practical and logistical convenience. The absorber is a liquid that sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be used to verify the results of numerical analysis. It can also be used to check the accuracy of assumptions related to the motion of the liquid. There are extensive theoretical and experimental studies in the literature related to the dynamic and structural behavior of tuned sloshing dampers. Most of these works attempt to estimate aspects of the sloshing behavior of the liquid such as the free-surface motion and the total force applied by the liquid to the wall of the container. For these purposes, sensors such as load cells and ultrasonic sensors are prevalent in experimental works. Load cells are only capable of measuring force and require conducting tests both with and without liquid to obtain the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence cannot capture the whole free-surface motion; furthermore, in the case of liquid splashing they may give incorrect data. In this work, a method for evaluating the sloshing wave height by using camera recordings and image processing techniques is presented. In this method, the motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera aligned with the free surface of the liquid. The video captured by the camera is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts with cropping the desired region; by recognizing the regions containing liquid and eliminating noise and liquid splashing, the final picture depicting the free surface of the liquid is obtained. This picture is then used to obtain the liquid height along the length of the container. The process is verified against ultrasonic sensors that measured the fluid height at points on the liquid surface. Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper
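The frame-by-frame pipeline described above (crop the region of interest, segment the liquid, suppress splash noise, extract the surface profile) can be sketched in a few lines. The original work used the MATLAB Image Processing Toolbox; the following Python/OpenCV version is an equivalent, hypothetical sketch, with the threshold, kernel size, and pixel-to-millimeter scale chosen for illustration only.

```python
import cv2
import numpy as np

def surface_profile(frame, roi, threshold=60, px_per_mm=4.0):
    """Extract the free-surface height profile from one video frame.

    frame: grayscale image of the transparent container (liquid assumed dark).
    roi: (x, y, w, h) crop of the container interior.
    Returns liquid height (mm) for each pixel column of the ROI.
    """
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]

    # Binarize: liquid pixels -> 255, background -> 0
    _, mask = cv2.threshold(crop, threshold, 255, cv2.THRESH_BINARY_INV)

    # Morphological opening removes splash droplets and noise specks
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    heights = np.zeros(w)
    for col in range(w):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            # Topmost liquid pixel marks the free surface in this column
            heights[col] = (h - rows[0]) / px_per_mm
    return heights
```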
Procedia PDF Downloads 345
420 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom
Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr
Abstract:
The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written in a black pigment. The box was made from several panels assembled by wooden dowels and secured with plant ropes, and the entire box is covered with a red pigment. This study aims to use analytical techniques in order to identify and gain a deep understanding of the box's components. Moreover, the authors were particularly interested in using infrared reflectance transformation imaging (RTI-IR) to enhance the hidden inscriptions on the lid. The identification of wood species was also included in this study. Visual observation and assessment were carried out to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood-joint techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier-transform infrared spectroscopy (FTIR) were used in this study in order to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers and the adhesives from previous conservation; the RTI-IR technique was also very effective in enhancing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions proved that they are hieratic writings of a funerary text. After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum. Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation
Procedia PDF Downloads 272
419 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This work simulates the voltage drop across, and the resistance of, exploding copper wires of 25, 40, and 100 µm diameter, surrounded by nitrogen at 1 bar and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and combined with heat conduction through the PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and the NEC radiation. In the first stage, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be treated with 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current was carried by the vaporized wire material before it was dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen and copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected, due to ultraviolet radiation and the explosion shock waves, which may have ionized the nitrogen. Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves
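The third-stage breakdown estimate, integrating Townsend growth coefficients along electric field lines, amounts to checking a streamer (Meek-type) criterion on each line. The sketch below is a hypothetical minimal version: the alpha/N lookup table stands in for swarm data that would come from BOLSIG⁺ for the local nitrogen/copper-vapor mixture, and the critical avalanche exponent K ≈ 18 is a commonly quoted threshold assumed here, not a value taken from the paper.

```python
import numpy as np

# Illustrative effective Townsend coefficient alpha_eff/N [m^2] versus
# reduced field E/N [Td]; in practice these rows come from BOLSIG+
# swarm calculations for the local nitrogen/copper-vapor mixture.
EN_TD = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
ALPHA_OVER_N = np.array([1e-24, 5e-23, 8e-22, 5e-21, 2e-20])

def streamer_integral(e_field, positions, n_gas):
    """Integrate alpha_eff along one electric-field line.

    e_field: field magnitude [V/m] sampled at points along the line
    positions: cumulative arc length [m] of those sample points
    n_gas: local gas number density [1/m^3]
    Returns the avalanche exponent K = integral(alpha_eff dl).
    """
    en_td = e_field / n_gas / 1e-21          # convert to Townsend units
    alpha = n_gas * np.interp(en_td, EN_TD, ALPHA_OVER_N)
    return np.trapz(alpha, positions)

def breakdown_reached(e_field, positions, n_gas, k_crit=18.0):
    # Meek-type criterion: the avalanche becomes a streamer once K >= k_crit
    return streamer_integral(e_field, positions, n_gas) >= k_crit
```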
Procedia PDF Downloads 89
418 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks
Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas
Abstract:
The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error-recognition task under three conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. For the EEG activity recorded after task onset in the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. For the EEG activity recorded after task onset in the arithmetic and algebraic conditions, additional activation in the delta/theta band was observed in the right parietal cortex. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of the alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitudes of the theta synchronization and of the alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibition-related activity. We gratefully acknowledge the support from the №11.G34.31.0043 grant from the Government of the Russian Federation. Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization
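As background to the ERSP measure used above: the event-related spectral perturbation is the trial-averaged time-frequency power expressed as the dB change from a pre-stimulus baseline, so negative values correspond to the desynchronization and positive values to the synchronization reported in the abstract. The following Python sketch shows one common way to compute it from single-channel epoched data; the STFT parameters, epoch timing, and baseline window are illustrative assumptions, not the study's actual analysis settings.

```python
import numpy as np
from scipy.signal import stft

def ersp_db(epochs, fs, baseline=(-0.5, 0.0), nperseg=256):
    """Event-related spectral perturbation in dB.

    epochs: array (n_trials, n_samples), one channel, time-locked to task
            onset; the epoch is assumed to start 1 s before onset.
    fs: sampling rate [Hz]; baseline: interval [s] relative to onset.
    Returns (freqs, times, ersp) where ersp is the dB change from baseline.
    """
    t_min = -1.0  # assumed epoch start relative to task onset [s]
    freqs, times, spec = stft(epochs, fs=fs, nperseg=nperseg, axis=-1)
    power = np.abs(spec) ** 2          # (n_trials, n_freqs, n_times)
    power = power.mean(axis=0)         # average over trials

    times = times + t_min              # re-reference times to task onset
    base = (times >= baseline[0]) & (times < baseline[1])
    base_power = power[:, base].mean(axis=1, keepdims=True)

    # Negative values = desynchronization, positive = synchronization
    ersp = 10.0 * np.log10(power / base_power)
    return freqs, times, ersp
```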
Procedia PDF Downloads 526
417 Predicting Daily Patient Hospital Visits Using Machine Learning
Authors: Shreya Goyal
Abstract:
The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility during a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources: if there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another major advantage of this software is adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, as the hospital does not have to share the data with any third party or upload it to the cloud: the software reads the data locally from the machine. The data needs to be arranged in a particular format, and the software will then be able to read the data and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it on external servers, which helps maintain the confidentiality and integrity of sensitive patient information. Historical patient data is used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The model is trained with a supervised learning method that minimizes the SVM objective function; since this objective is convex, the solver converges to the global minimum rather than getting trapped in local minima. A strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better planning of personnel and medical resources. Keywords: machine learning, SVM, HIPAA, data
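A minimal sketch of the kind of SVM regression pipeline the abstract describes is shown below, assuming a locally stored tabular file of daily records. The file name, column names, holdout split, and hyperparameters are all hypothetical; the study's actual data format and its transfer function are not specified in the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical daily records read locally (no cloud upload, per the paper):
# columns: mean_patient_age, hour_of_day, day_of_week (0-6),
#          season (0-3), local_event (0/1), visits
df = pd.read_csv("hospital_visits.csv")

features = ["mean_patient_age", "hour_of_day", "day_of_week",
            "season", "local_event"]
X, y = df[features].to_numpy(), df["visits"].to_numpy()

# Train on all but the last 30 days, hold those out for evaluation
X_train, X_test = X[:-30], X[-30:]
y_train, y_test = y[:-30], y[-30:]

# RBF-kernel SVR; feature scaling matters because SVMs are distance-based
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)

# Mean relative accuracy over the holdout (assumes nonzero daily counts)
pred = model.predict(X_test)
accuracy = 100.0 * (1.0 - np.abs(pred - y_test) / y_test).mean()
print(f"mean predictive accuracy over holdout: {accuracy:.1f}%")
```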
Procedia PDF Downloads 66
416 Characterisation of Human Attitudes in Software Requirements Elicitation
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana
Abstract:
It is evident that there has been progress in the development and innovation of tools, techniques and methods for software development. Even so, few methodologies include the human factor from the point of view of motivation, emotions and impact on the work environment; aspects that, when mishandled or not taken into consideration, increase the iterations in the requirements elicitation phase. This generates a broad number of changes in the characteristics of the system during its development process and an overinvestment of resources to obtain a final product that, often, does not live up to the expectations and needs of the client. Human factors such as emotions or personality traits are naturally associated with the process of developing software. However, most existing works are oriented towards the analysis of the final users of the software and do not take into consideration the emotions and motivations of the members of the development team. Given that, in industry, the strategies used to select requirements engineers and/or analysts do not take said factors into account, it is important to identify and describe the relevant characteristics or personality traits in order to elicit requirements effectively. This research describes the main personality traits associated with requirements elicitation tasks through an analysis of the existing literature on the topic and a compilation of our experiences as software development project managers in the academic and productive sectors, allowing for the characterisation of a suitable profile for this job. Moreover, a psychometric test is used as an information-gathering technique and applied to the personnel of some local companies in the software development sector. This information makes it possible to compare how effectively these software development teams were formed against the proposed profile. The results show that, of the software development companies studied, 53.58% have selected the personnel for the task of requirements elicitation adequately, 35.71% possess some of the characteristics to perform the task, and 10.71% are inadequate. From this information, it is possible to conclude that 46.42% of the requirements engineers selected by the companies could perform other roles more adequately; a change which could improve the performance and competitiveness of the work team and, indirectly, the quality of the product developed. Likewise, the research allowed for the validation of the pertinence and usefulness of the psychometric instrument, as well as the accuracy of the characteristics of the proposed requirements engineer profile. Keywords: emotions, human attitudes, personality traits, psychometric tests, requirements engineering
Procedia PDF Downloads 264
415 Multi-Residue Analysis (GC-ECD) of Some Organochlorine Pesticides in Commercial Broiler Meat Marketed in Shivamogga City, Karnataka State, India
Authors: L. V. Lokesha, Jagadeesh S. Sanganal, Yogesh S. Gowda, Shekhar, N. B. Shridhar, N. Prakash, Prashantkumar Waghe, H. D. Narayanaswamy, Girish V. Kumar
Abstract:
Organochlorine (OC) insecticides are among the most important organotoxins and form a large group of pesticides. The physicochemical properties of these toxins, especially their lipophilicity, facilitate their absorption and storage in meat, and they thus pose a public health threat to humans. The presence of these toxins in broiler meat can serve as a quantitative and qualitative index of their presence in animal bodies. Wastewater used for irrigation after crop spraying, animal feed contaminated with pesticides, and polluted air are the potential sources of residues in animal products. Fifty broiler meat samples were collected from different retail outlets of Bengaluru city, Karnataka state, under ice-cold conditions and later stored at -20°C until analysis. All the samples were subjected to screening and quantification of OC pesticides by gas chromatography with electron capture detection (GC-ECD, VARIAN make), viz. Alachlor, Aldrin, Alpha-BHC, Beta-BHC, Dieldrin, Delta-BHC, o,p-DDE, p,p-DDE, o,p-DDD, p,p-DDD, o,p-DDT, p,p-DDT, Endosulfan-I, Endosulfan-II, Endosulfan Sulphate and Lindane (all standards were procured from Merck). Extraction was undertaken by blending fifty grams (g) of meat sample with 50 g of anhydrous sodium sulphate, 120 ml of n-hexane and 120 ml of acetone for 15 minutes; the extract was washed with distilled water and residual moisture was removed with anhydrous sodium sulphate. Partitioning was done with 25 ml of petroleum ether, 10 ml of acetonitrile and 15 ml of n-hexane, shaken vigorously for two minutes, and sample clean-up was performed on a Florisil column. The samples, reconstituted in n-hexane (Merck), were injected into the GC-ECD. The present study reveals that, among the fifty chicken samples subjected to analysis, 60% (15/50), 32% (8/50), 28% (7/50), 20% (5/50) and 16% (4/50) of samples were contaminated with DDTs, Delta-BHC, Dieldrin, Aldrin and Alachlor, respectively. DDT metabolites and Delta-BHC were the most frequently detected OC pesticides. The detected levels of the pesticides were below the MRLs (according to the Export Council of India notification for fresh poultry meat). Keywords: accuracy, gas chromatography, meat, pesticide, petroleum ether
Procedia PDF Downloads 329
414 AI Predictive Modeling of Excited State Dynamics in OPV Materials
Authors: Pranav Gunhal, Krish Jhurani
Abstract:
This study tackles the significant computational challenge of predicting excited-state dynamics in organic photovoltaic (OPV) materials, a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited-state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited-state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited-state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited-state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from the TDDFT-calculated values. The model also correctly identified the key electronic transitions contributing to the excited-state dynamics in 92% of the test cases, signifying substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. The implementation of this AI model is estimated to reduce the computational cost of predicting excited-state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited-state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions. Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling
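The abstract does not specify the molecular encoding or architecture details, so the following PyTorch sketch should be read as a generic, hypothetical shape of such a model: a transformer encoder over tokenized molecular strings (e.g., SMILES) regressing a single excited-state lifetime, trained with an L1 (mean-absolute-error) objective to match the reported metric. Vocabulary size, depth, and pooling are illustrative choices.

```python
import torch
import torch.nn as nn

class LifetimePredictor(nn.Module):
    """Transformer encoder regressing an excited-state lifetime (ps)
    from a tokenized molecular string (e.g., SMILES)."""

    def __init__(self, vocab_size=64, d_model=128, nhead=8,
                 num_layers=4, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=512,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)  # scalar lifetime in ps

    def forward(self, tokens):
        # tokens: (batch, seq_len <= max_len) integer-encoded, 0 = padding
        mask = tokens == 0
        x = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        x = self.encoder(x, src_key_padding_mask=mask)
        # Mean-pool over non-padding positions, then regress
        x = (x * (~mask).unsqueeze(-1)).sum(1) / (~mask).sum(1, keepdim=True)
        return self.head(x).squeeze(-1)

model = LifetimePredictor()
batch = torch.randint(1, 64, (8, 100))    # 8 hypothetical molecules
# L1 loss mirrors the mean-absolute-error metric reported in the abstract
loss = nn.functional.l1_loss(model(batch), torch.rand(8))
loss.backward()
```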
Procedia PDF Downloads 121