Search results for: modified method of pin-in-plaster
13160 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by Density Functional Theoretical (DFT) Method
Authors: Choukri Lekbir, Mira Mokhtari
Abstract:
Hydrogen-material interactions and their mechanisms can be modeled at the nanoscale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a nickel cluster model was studied using the density functional theory (DFT) method. Two types of clusters were optimized: nickel clusters and hydrogen-nickel systems. For nickel clusters (n = 1-6) without hydrogen, three types of electronic structures (neutral, cationic, and anionic) were optimized at three levels of theory (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, and PBE/DGDZVP2). Comparison of the binding energies and bond lengths of the three structures shows that the results for the neutral and anionic nickel clusters are in good agreement with experiment, and that, for these clusters, the PBE/DGDZVP2 level reproduces the experimental values best. For anionic nickel clusters (n = 1-6) with hydrogen, optimization of the hydrogen-nickel (anionic) structures at the PBE/DGDZVP2 level shows that the binding energies and bond lengths increase compared with those of the anionic clusters without hydrogen. This reveals the armor (shielding) effect exerted by hydrogen on the electronic structure of nickel, attributed to the storage of hydrogen energy within the cluster structures. The comparison of bond lengths for the two sets of clusters also shows an expansion of the cluster geometry due to the presence of hydrogen.
Keywords: binding energies, bond lengths, density functional theory, geometry optimization, hydrogen energy, nickel cluster
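As an illustration of the type of calculation described above, the following minimal PySCF sketch estimates the binding energy of a small neutral Ni₂ cluster. The geometry, spin states, functional, and basis/ECP choice are assumptions for illustration only and do not reproduce the authors' setup; geometry optimization and convergence settings are omitted.

```python
# Illustrative sketch (not the authors' setup): single-point DFT energies for a Ni atom and
# an Ni2 dimer, and a rough binding energy per atom. Geometry, spin and basis are assumptions.
from pyscf import gto, dft

def ni_energy(atoms, charge=0, spin=0, xc="pbe,pbe"):
    """Single-point unrestricted Kohn-Sham energy (Hartree) with a LANL2DZ basis/ECP."""
    mol = gto.M(atom=atoms, basis="lanl2dz", ecp="lanl2dz",
                charge=charge, spin=spin, verbose=0)
    mf = dft.UKS(mol)
    mf.xc = xc
    return mf.kernel()

# Ni atom (two unpaired electrons assumed) and an Ni2 dimer at an assumed 2.2 A separation
e_atom = ni_energy("Ni 0 0 0", spin=2)
e_dimer = ni_energy("Ni 0 0 0; Ni 0 0 2.2", spin=2)

# Binding energy per atom, converted to eV (1 Hartree = 27.2114 eV)
eb_per_atom = (2 * e_atom - e_dimer) / 2 * 27.2114
print(f"Estimated Ni2 binding energy: {eb_per_atom:.2f} eV/atom")
```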
Procedia PDF Downloads 427
13159 Treatment of Interferograms Image of Perturbation Processes in Metallic Samples by Optical Method
Authors: Daira Radouane, Naim Boudmagh, Hamada Adel
Abstract:
The aim of this work is to use the shearing technique with an image-splitting element: a Wollaston prism. We want to characterize this prism so that it can later be used in shearing interferometry analysis. A Wollaston prism is made of a birefringent material, i.e., a material with two refractive indices. The prism is cut so that the directions associated with these indices lie in its entrance face; these two directions are mutually perpendicular.
Keywords: non-destructive testing, aluminium, interferometry, image processing
Procedia PDF Downloads 334
13158 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work describes a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents these changes using the powers of the different frequency bands in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a useful visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, based on the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second with different color codes, and the segment length can be selected according to the objective. The algorithm has been tested on an EEG data set obtained from the University of California, San Diego online data repository. The tool gives a better visualization of the signal in the form of segmented epochs of the desired length, representing the power-spectrum variation in the data. The algorithm takes data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
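A minimal sketch of the filtering and band-power segmentation steps described above is given below. The filter orders, the 50 Hz notch frequency, and the window settings are assumptions; the PCA and wavelet de-noising stages and the color-coded display are omitted.

```python
# Minimal sketch of the preprocessing and band-power segmentation steps described above.
# Filter orders, the 50 Hz notch frequency and the window settings are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

def preprocess(eeg, fs):
    """Band-pass 0.1-45 Hz plus a notch at the mains frequency (assumed 50 Hz)."""
    b, a = butter(4, [0.1, 45], btype="bandpass", fs=fs)
    x = filtfilt(b, a, eeg)
    bn, an = iirnotch(w0=50, Q=30, fs=fs)
    return filtfilt(bn, an, x)

def band_powers(seg, fs):
    """Spectral power in the classical EEG bands for one window."""
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    f, pxx = welch(seg, fs=fs, nperseg=min(len(seg), fs * 2))
    return {name: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in bands.items()}

def segment(eeg, fs, epoch_s=1.0):
    """Slide a fixed-length window and label each epoch by its dominant band."""
    x = preprocess(eeg, fs)
    step = int(epoch_s * fs)
    labels = []
    for start in range(0, len(x) - step + 1, step):
        bp = band_powers(x[start:start + step], fs)
        labels.append(max(bp, key=bp.get))
    return labels

fs = 256
t = np.arange(0, 10, 1 / fs)
demo = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # alpha-dominated toy signal
print(segment(demo, fs)[:5])
```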
Procedia PDF Downloads 405
13157 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 38
13156 Structural Inequality and Precarious Workforce: The Role of Labor Laws in Destabilizing the Labor Force in Iran
Authors: Iman Shabanzadeh
Abstract:
Over the last three decades, the main demands of the Iranian workforce have focused on three areas: the right to a decent wage, the right to organize, and the right to job security. The present study focuses on the component of job security, and its purpose is to identify the mechanisms in Iran's labor law that have destabilized and undermined workers' job security. The research method is descriptive-analytical. Data were collected from library and documentary sources on labor-rights legislation in Iran and from semi-structured interviews with experts, and the qualitative content analysis method was used in the data analysis stage. Trend analysis of labor force statistics in Iran over the last three decades shows that the employment structure has faced a growing active population, but in the last decade a large part of this population has been active mainly in the service sector and in enterprises without formal contracts, so a smaller share of this employment has insurance coverage and a larger share is underemployed. The results of this study identify four main legal and executive mechanisms of labor instability in Iran: 1) temporaryization of the labor force through differing interpretations of the labor law, 2) labor adjustment in the public sector and the emergence of manpower contracting companies, 3) the withdrawal of labor-law protection from workers in small workshops, and 4) numerous restrictions on the effective organization of workers. The theoretical conclusion of this article is that the main root of the challenges facing the labor community and the destabilized workforce in Iran is structural inequality in the field of labor security, traces of which can be seen in the legal provisions and executive regulations of this field.
Keywords: inequality, precariat, temporaryization, labor force, labor law
Procedia PDF Downloads 67
13155 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category-recognition features and individual-recognition features. Current automated face recognition systems extract a feature vector of a specific dimensionality from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that overlooks the fine details forensic experts attend to most. In our experiment, we adopted a variety of deep-learning-based face recognition algorithms and compared a large number of naturally collected face images with known frontal ID photos of the same persons; downscaling and manual handling were performed on the test images. The results support that deep-learning-based facial recognition algorithms detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, while the forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level, and at the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
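The sketch below illustrates one common way to build and fuse likelihood ratios at the score level. Modelling the genuine and impostor score distributions with Gaussian kernel density estimates, and summing log-LRs across sources as if they were independent, are assumptions for illustration; the paper's exact fusion rule is not reproduced.

```python
# Sketch of score-level fusion with likelihood ratios. The KDE score models and the
# independence assumption behind summing log-LRs are illustrative choices, not the paper's.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy calibration scores: one automated-system score and one examiner score per comparison
genuine = {"system": rng.normal(0.8, 0.1, 500), "examiner": rng.normal(0.7, 0.15, 500)}
impostor = {"system": rng.normal(0.3, 0.1, 500), "examiner": rng.normal(0.35, 0.15, 500)}

def log_lr(source, score):
    """log10 likelihood ratio of a score under the genuine vs. impostor models."""
    f_gen = gaussian_kde(genuine[source])(score)[0]
    f_imp = gaussian_kde(impostor[source])(score)[0]
    return np.log10(f_gen / f_imp)

def fused_log_lr(scores):
    """Naive fusion: sum the log-LRs from each source."""
    return sum(log_lr(src, s) for src, s in scores.items())

query = {"system": 0.75, "examiner": 0.6}
print(f"Fused log10 LR: {fused_log_lr(query):.2f}")  # > 0 favours the same-source hypothesis
```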
Procedia PDF Downloads 132
13154 Antihyperlipidemia Combination of Simvastatin and Herbal Drink (Conventional Drug Interaction Potential Study and Herbal As Prevention Adverse Effect on Combination Therapy Hyperlipidemia)
Authors: Gesti Prastiti, Maylina Adani, Yuyun darma A. N., M. Khilmi F., Yunita Wahyu Pratiwi
Abstract:
Combination therapy may allow interactions between two or more drugs that produce adverse effects in patients. Simvastatin, an antihyperlipidemic drug, can interact with drugs that act on cytochrome P450 CYP3A4, because such interactions can interfere with the action of simvastatin. Flavonoids found in plants can inhibit cytochrome P450 CYP3A4; if taken with simvastatin, they can increase simvastatin levels in the body and thus increase the potential for side effects such as myopathy and rhabdomyolysis. Green tea leaves and mint are herbal medicines with antihyperlipidemic effects. This study aims to determine the potential interaction of simvastatin with herbal drinks (green tea and mint leaves). The research method was an experimental post-test-only control design. Test subjects were divided into five groups: a normal group, a negative control group, a simvastatin group, a simvastatin plus green tea combination group, and a simvastatin plus mint leaves combination group. The study was conducted over 32 days, and total cholesterol levels were analyzed by the enzymatic colorimetric test method. The mean total cholesterol values obtained were: normal group, 65.92 mg/dL; negative control group, 69.86 mg/dL; simvastatin group, 58.96 mg/dL; green tea combination group, 58.96 mg/dL; and mint leaves combination group, 63.68 mg/dL. The conclusion is that combination therapy of simvastatin with herbal drinks has the potential for pharmacodynamic interactions with synergistic, antagonistic, and additive effects, so the combination therapies are no more effective than simvastatin therapy alone.
Keywords: hyperlipidemia, simvastatin, herbal drinks, green tea leaves, mint leaves, drug interactions
Procedia PDF Downloads 398
13153 Highly Efficient Ca-Doped CuS Counter Electrodes for Quantum Dot Sensitized Solar Cells
Authors: Mohammed Panthakkal Abdul Muthalif, Shanmugasundaram Kanagaraj, Jumi Park, Hangyu Park, Youngson Choe
Abstract:
The present study reports the incorporation of calcium ions into CuS counter electrodes (CEs) in order to modify the photovoltaic performance of quantum dot-sensitized solar cells (QDSSCs). Metal-ion-doped CuS thin films were prepared by the chemical bath deposition (CBD) method on FTO substrates and used directly as counter electrodes for QDSSCs based on TiO₂/CdS/CdSe/ZnS photoanodes. For the Ca-doped CuS thin films, copper nitrate and thioacetamide were used as the cationic and anionic precursors, respectively, and calcium nitrate tetrahydrate was used as the doping material. The surface morphology of the Ca-doped CuS CEs indicates that the fragments are uniformly distributed and the structure is densely packed with high crystallinity. The changes observed in the diffraction patterns suggest that the Ca dopant introduces increased disorder into the CuS structure. EDX analysis was employed for elemental identification, and the results confirmed the presence of Cu, S, and Ca on the FTO glass substrate. The photovoltaic current density-voltage characteristics of the Ca-doped CuS CEs show specific improvements in open-circuit voltage (Voc) and short-circuit current density (Jsc). Electrochemical impedance spectroscopy results show that Ca-doped CuS CEs have greater electrocatalytic activity and charge transport capacity than bare CuS. All the experimental results indicate that QDSSCs based on a 20% Ca-doped CuS CE exhibit a high power conversion efficiency (η) of 4.92%, a short-circuit current density of 15.47 mA cm⁻², an open-circuit photovoltage of 0.611 V, and a fill factor (FF) of 0.521 under one-sun illumination.
Keywords: Ca-doped CuS counter electrodes, surface morphology, chemical bath deposition method, electrocatalytic activity
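The reported efficiency can be cross-checked from the quoted Jsc, Voc, and FF with the standard relation η = Jsc·Voc·FF/Pin, assuming the usual one-sun input power of 100 mW/cm² (an assumption the abstract implies but does not state).

```python
# Cross-check of the reported power conversion efficiency, assuming one-sun
# illumination of 100 mW/cm^2 (AM 1.5G).
j_sc = 15.47   # short-circuit current density, mA/cm^2
v_oc = 0.611   # open-circuit voltage, V
ff = 0.521     # fill factor
p_in = 100.0   # incident power, mW/cm^2

eta = j_sc * v_oc * ff / p_in * 100  # percent
print(f"Power conversion efficiency: {eta:.2f}%")  # ~4.92%, matching the reported value
```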
Procedia PDF Downloads 168
13152 Clustering-Based Threshold Model for Condition Rating of Concrete Bridge Decks
Authors: M. Alsharqawi, T. Zayed, S. Abu Dabous
Abstract:
To ensure safety and serviceability of bridge infrastructure, accurate condition assessment and rating methods are needed to provide basis for bridge Maintenance, Repair and Replacement (MRR) decisions. In North America, the common practices to assess condition of bridges are through visual inspection. These practices are limited to detect surface defects and external flaws. Further, the thresholds that define the severity of bridge deterioration are selected arbitrarily. The current research discusses the main deteriorations and defects identified during visual inspection and Non-Destructive Evaluation (NDE). NDE techniques are becoming popular in augmenting the visual examination during inspection to detect subsurface defects. Quality inspection data and accurate condition assessment and rating are the basis for determining appropriate MRR decisions. Thus, in this paper, a novel method for bridge condition assessment using the Quality Function Deployment (QFD) theory is utilized. The QFD model is designed to provide an integrated condition by evaluating both the surface and subsurface defects for concrete bridges. Moreover, an integrated condition rating index with four thresholds is developed based on the QFD condition assessment model and using K-means clustering technique. Twenty case studies are analyzed by applying the QFD model and implementing the developed rating index. The results from the analyzed case studies show that the proposed threshold model produces robust MRR recommendations consistent with decisions and recommendations made by bridge managers on these projects. The proposed method is expected to advance the state of the art of bridges condition assessment and rating.Keywords: concrete bridge decks, condition assessment and rating, quality function deployment, k-means clustering technique
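A minimal sketch of how K-means can turn an integrated condition index into four rating thresholds is shown below. The 0-100 score scale, the synthetic scores, and the midpoint rule for placing thresholds are assumptions for illustration, not the paper's QFD scoring.

```python
# Minimal sketch of deriving four condition-rating thresholds from integrated condition
# scores with K-means. The 0-100 scale and the synthetic data are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(m, 5, 50) for m in (20, 45, 70, 90)]).reshape(-1, 1)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
centers = np.sort(km.cluster_centers_.ravel())

# Place each threshold midway between adjacent cluster centres
thresholds = (centers[:-1] + centers[1:]) / 2
print("Rating thresholds:", np.round(thresholds, 1))  # boundaries between the four rating bands
```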
Procedia PDF Downloads 227
13151 A Study on the Impact of Covid-19 on Primary Healthcare Workers in Ekiti State, South-West Nigeria
Authors: Adeyinka Adeniran, Omowunmi Bakare, Esther Oluwole, Florence Chieme, Temitope Durojaiye, Modupe Akinyinka, Omobola Ojo, Babatunde Olujobi, Marcus Ilesanmi, Akintunde Ogunsakin
Abstract:
Introduction: Globally, COVID-19 has greatly impacted the human race physically, socially, mentally, and economically, and healthcare workers seemed to bear the greatest impact. The study therefore sought to assess the impact of COVID-19 on primary healthcare workers in Ekiti, South-west Nigeria. Methods: This was a cross-sectional descriptive study using a quantitative data collection method among 716 primary healthcare workers in Ekiti state. Respondents were selected using an online convenience sampling method via their social media platforms. Data were collected, collated, and analyzed using SPSS version 25 and presented as frequency tables, means, and standard deviations. Bivariate and multivariate analyses were conducted using a t-test, and the level of statistical significance was set at p < 0.05. Results: Less than half (47.1%) of respondents were in the 41-50 age group, with a mean age of 44.4 ± 6.4 years. A majority (89.4%) were female, and almost all (96.2%) were married. More than 90% had heard of coronavirus, and 85.8% had to spend more money on activities of daily living such as transportation (90.1%), groceries (80.6%), assisting relations (95.8%), and sanitary measures (disinfection) at home (95.0%). COVID-19 had a strong negative impact on about 89.7% of healthcare workers, with a mean impact score of 22 ± 4.8. Conclusion: COVID-19 negatively impacted the daily living and professional duties of primary healthcare workers, which was reflected in their psychological, physical, social, and economic well-being. Disease outbreaks are unlikely to disappear in the near future; hence, proactive global interventions and homegrown measures should be adopted to protect healthcare workers and save lives.
Keywords: Covid-19, health workforce, primary health care, health systems, depression
Procedia PDF Downloads 89
13150 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review
Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos
Abstract:
Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify their causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations: best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize interventions recommended after critical event analysis, prior to implementation in the clinical environment. Methods: Suggested interventions from Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite interventions and how the interventions could be improved. Interventions were modified with subsequent simulations until the recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In scenario 1, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In scenario 2, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial in identifying which interventions should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (an epinephrine kit or a mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes in memory aids). Given that even the most successful interventions needed modification and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.
Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation
Procedia PDF Downloads 158
13149 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs), given source characteristics, source-to-site distance, and local site conditions, for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms; the conventional method remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could benefit the development of probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
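A compact sketch of the ground-motion-model comparison described above follows: a linear-regression baseline against a Random Forest predicting log PGA from magnitude, distance, and site condition. The synthetic attenuation relation used to generate the data is an assumption for illustration, not the study's dataset.

```python
# Sketch: linear-regression baseline vs. Random Forest for predicting log PGA from
# magnitude, distance and Vs30. The toy attenuation model below is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(3.0, 7.5, n)          # moment magnitude
dist = rng.uniform(5.0, 200.0, n)       # source-to-site distance, km
vs30 = rng.uniform(180.0, 760.0, n)     # site shear-wave velocity, m/s
log_pga = (1.2 * mag - 1.6 * np.log(dist) - 0.3 * np.log(vs30 / 500)
           + rng.normal(0, 0.5, n))     # toy attenuation relation plus aleatory scatter

X = np.column_stack([mag, np.log(dist), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, test_size=0.3, random_state=0)

for name, model in [("Linear regression", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE (log PGA) = {rmse:.3f}")
```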
Procedia PDF Downloads 110
13148 A Study on Factors Affecting (Building Information Modelling) BIM Implementation in European Renovation Projects
Authors: Fatemeh Daneshvartarigh
Abstract:
New technologies and applications have radically altered construction techniques in recent years. These technologies encompass a wide range of visualization, simulation, and analytic tools for anticipating how a building will act, perform, and appear, and they have a considerable impact on the delivery of construction projects in today's architecture, engineering, and construction (AEC) industries. The pace of change in BIM-related topics differs worldwide and depends on many factors, e.g., the national policies of each country; therefore, there is a need for focused research on a specific area with common characteristics. One of the necessary measures to increase the use of this new approach is to examine the challenges and obstacles facing it. In this research, based on the Delphi method, the background and related literature are first reviewed. Then, using the knowledge obtained from the literature, a primary questionnaire is generated and completed by experts selected through snowball sampling. It covers the experts' attitudes towards implementing BIM in renovation projects and their view of the benefits and obstacles in this regard. By analyzing the primary questionnaire, a second group of experts is selected from among the participants to be interviewed, and the results are analyzed using thematic analysis. Six themes are obtained: management support, staff resistance, client willingness, cost of software and implementation, difficulty of implementation, and other reasons. A final questionnaire is then generated from these themes and completed by the same group of experts, and the responses are analyzed by the fuzzy Delphi method, yielding an exact ranking of the themes. The final results show that management support, staff resistance, and client willingness are the most critical barriers to BIM use in renovation projects.
Keywords: building information modeling, BIM, BIM implementation, BIM barriers, BIM in renovation
Procedia PDF Downloads 170
13147 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. The unsupervised feature selection can be categorized as feature subset selection and feature ranking method, and we focused on unsupervised feature ranking methods which evaluate the features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve their higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined with the ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
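The sketch below is a simplified reading of the random-subspace, multiple-k idea: each ensemble member clusters a random feature subspace with a random k, features in that subspace are scored by how well they separate the resulting clusters, and the scores are averaged. The per-feature scoring rule (ANOVA F against the cluster labels) is a stand-in assumption, not the authors' exact FRRM formula.

```python
# Simplified sketch of a random-subspace / multiple-k feature ranking. The ANOVA-F scoring
# rule against the cluster labels is a stand-in assumption, not the paper's exact scoring.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.feature_selection import f_classif

def frrm_like_ranking(X, n_members=50, k_range=(2, 6), subspace_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    scores = np.zeros(n_features)
    counts = np.zeros(n_features)
    for _ in range(n_members):
        feats = rng.choice(n_features, max(2, int(subspace_frac * n_features)), replace=False)
        k = int(rng.integers(k_range[0], k_range[1] + 1))    # multiple-k: no fixed cluster number
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(0, 10**6))).fit_predict(X[:, feats])
        f_vals, _ = f_classif(X[:, feats], labels)           # how well each feature separates clusters
        scores[feats] += np.nan_to_num(f_vals)
        counts[feats] += 1
    return scores / np.maximum(counts, 1)

# Toy data: 4 informative features plus 6 noise features
X_inf, _ = make_blobs(n_samples=300, centers=3, n_features=4, random_state=0)
X = np.hstack([X_inf, np.random.default_rng(0).normal(size=(300, 6))])
ranking = np.argsort(frrm_like_ranking(X))[::-1]
print("Features ranked by importance:", ranking)  # informative features (0-3) should lead
```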
Procedia PDF Downloads 342
13146 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots
Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha
Abstract:
Progress in analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or automation of measurements. Chemical sensors have displaced conventional analytical methods because they combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also contains a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device consists mainly of two elements: a sensitive element, where receptor-analyte recognition occurs, and a transducer element, which receives the signal and converts it into a measurable one. On this basis, biosensors can be divided into two categories: by recognition element (e.g., immunosensors) and by transducer (e.g., optical sensors). The operation of an optical sensor is based on measuring quantitative changes in parameters characterizing light radiation, most often amplitude (intensity), frequency, or polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coating the sensor are analyzed; in an indirect method, indicators are used whose optical properties change as a result of the transformation of the test species. The dyes most commonly used in the indirect method are small molecules with an aromatic ring, such as rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have much better photoluminescent properties than organic dyes, as well as better bioavailability and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size, and this very limited number of atoms and 'nano' size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter that occurs mainly in the brain and central nervous system of mammals; it is responsible for transmitting movement-related information through the nervous system and plays an important role in learning and memory. Its detection is significant for diseases associated with the central nervous system, such as Parkinson's disease and schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface, and fluorescence quenching occurs due to resonance energy transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the dopamine concentration.
Keywords: biosensor, dopamine, fluorescence, quantum dots
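A common first-order way to turn quenching of QD fluorescence into a concentration estimate is a Stern-Volmer calibration, sketched below. Using Stern-Volmer for a FRET-based sensor, and the constant and intensity values shown, are assumptions for illustration; a real device would be calibrated against known dopamine standards.

```python
# Illustrative Stern-Volmer style readout for quenching-based sensing:
# F0/F = 1 + Ksv * [dopamine]. The Ksv value and intensities are made-up numbers.
def dopamine_concentration(f0, f, ksv):
    """Return quencher (dopamine) concentration in the units implied by 1/ksv."""
    return (f0 / f - 1.0) / ksv

ksv = 2.5e4            # assumed Stern-Volmer constant, 1/M
f0, f = 1000.0, 820.0  # fluorescence without / with the sample (arbitrary units)
print(f"Estimated dopamine concentration: {dopamine_concentration(f0, f, ksv) * 1e6:.1f} uM")
```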
Procedia PDF Downloads 373
13145 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identifying fake news across multiple domains. To address this, approaches based on domain labels have been proposed: by assigning news to its specific area in advance, a domain-specific classifier can judge fake news in that field more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information; in addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings extracts comprehensive features of the news. Finally, a classifier determines the authenticity of the news. To verify the proposed framework, tests are conducted on existing, widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, the method can still effectively transfer domain knowledge, which reduces the time consumed by tagging without sacrificing detection accuracy.
Keywords: fake news, deep learning, natural language processing, multiple domains
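One very rough stand-in for the pipeline described above is sketched below: learn low-dimensional "domain embeddings" without domain labels (here TF-IDF followed by truncated SVD), concatenate them with the text features, and train a fake/real classifier. Every component and the toy data are assumptions for illustration; the paper's actual modules are not specified here.

```python
# Stand-in sketch of the framework's three stages: unsupervised domain embedding,
# feature extraction, and classification. All components and data are illustrative.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["stocks rally after earnings beat", "miracle cure reverses aging overnight",
         "city council approves new budget", "aliens endorse candidate, sources say"]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy data)

X_text = TfidfVectorizer().fit_transform(texts)
domain_emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(X_text)  # unsupervised domain vectors

X = np.hstack([X_text.toarray(), domain_emb])   # combine text features with domain embeddings
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))                           # sanity check on the toy set
```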
Procedia PDF Downloads 106
13144 Grey Relational Analysis Coupled with Taguchi Method for Process Parameter Optimization of Friction Stir Welding on 6061 AA
Authors: Eyob Messele Sefene, Atinkut Atinafu Yilma
Abstract:
The pursuit of the highest strength-to-weight ratio has attracted increasing interest in virtually all areas where weight reduction is indispensable, and one of the recent manufacturing advances toward this goal is friction stir welding (FSW). The process is widely used for joining similar and dissimilar non-ferrous materials. In FSW, the mechanical properties of the weld joints are governed by properly selected process parameters. This paper identifies the optimum process parameters for attaining enhanced mechanical properties of the weld joint. The experiment was conducted on 5 mm thick 6061 aluminum alloy sheet in a butt joint configuration. The process parameters considered were rotational speed, traverse speed (feed rate), axial force, dwell time, tool material, and tool profile. The process parameters were optimized using a mixed L18 orthogonal array and the grey relational analysis method with the larger-the-better quality characteristic. The mechanical properties of the weld joint were examined through tensile testing, hardness testing, and liquid penetrant testing at ambient temperature, and ANOVA was conducted to identify the significant process parameters. The research shows that dwell time, rotational speed, tool shape, and traverse speed are significant, with a joint efficiency of about 82.58%. Nine confirmatory tests were conducted, and the results indicate that the average values of the grey relational grade fall within the 99% confidence interval; hence the experiment is shown to be reliable.
Keywords: friction stir welding, optimization, 6061 AA, Taguchi
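The grey relational grade computation used in this kind of study can be sketched as follows: larger-the-better normalization, grey relational coefficients with a distinguishing coefficient ζ = 0.5 (the conventional choice), and the grade as the mean coefficient per run. The response values below are illustrative, not the paper's measurements.

```python
# Sketch of the grey relational grade calculation with larger-the-better normalization.
# The response matrix is illustrative; zeta = 0.5 is the conventional choice.
import numpy as np

# Rows = experimental runs, columns = responses (e.g., tensile strength, hardness)
Y = np.array([[210., 62.], [235., 66.], [198., 60.], [242., 68.]])

# Larger-the-better normalization to [0, 1]
norm = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))

# Deviation from the ideal (reference) sequence of ones
delta = np.abs(1.0 - norm)
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = coeff.mean(axis=1)          # grey relational grade per run
print("GRG:", np.round(grade, 3))
print("Best run:", int(np.argmax(grade)) + 1)
```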
Procedia PDF Downloads 106
13143 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures and do not capture the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm, combined with high tide and sea level rise, temporarily exceeds a given threshold; in South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rainfall at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center for 1970 to 2020 show 371 events with more than 3 inches of rain in a day over 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The developed method uses the Failure Mode and Effects Analysis (FMEA) method of the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day, 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illness, and property damage. Urbanization and population changes are related via the U.S. Census Bureau's annual population estimates, while data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track development and land use/land cover changes over time; the intent is to include temporal trends in population density growth and the impact of land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion raises awareness among local municipalities about their flood-risk assessments and gives insight into flood management actions and watershed development.
Keywords: flood risk, nuisance flooding, urban flooding, FMEA
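A toy sketch of the FMEA-style combination Risk = PoNF × CoNF follows, using the reported rainfall frequency (371 torrential-rain days over the 1970-2020 record) as a rough probability input. The 1-10 score mappings, the CoNF value, and treating every torrential-rain day as a potential nuisance flood day are invented simplifications for illustration; the paper's actual criteria are richer.

```python
# Toy sketch of an FMEA-style nuisance flood risk index: Risk = PoNF x CoNF.
# The 1-10 score mappings and the CoNF value are assumptions for illustration only.
torrential_days = 371            # days with > 3 in of rain, 1970-2020 (from the abstract)
record_days = 51 * 365           # approximate length of the 1970-2020 record

p_daily = torrential_days / record_days            # daily probability of a triggering rainfall
p_annual = 1 - (1 - p_daily) ** 365                # chance of at least one event in a year

# Map probability and consequence onto 1-10 FMEA-style scores (mapping is assumed)
ponf_score = min(10, max(1, round(p_daily * 500)))
conf_score = 6                   # assumed community-consequence score for the neighborhood

risk_index = ponf_score * conf_score
print(f"Daily P = {p_daily:.4f}, annual P = {p_annual:.2f}, risk index = {risk_index}")
```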
Procedia PDF Downloads 102
13142 Effect of Multi-Walled Carbon Nanotubes on Fuel Cell Membrane Performance
Authors: Rabindranath Jana, Biswajit Maity, Keka Rana
Abstract:
The fuel cell is among the most promising clean energy sources, since it does not generate toxic gases or other hazardous compounds. The direct methanol fuel cell (DMFC) is particularly user-friendly, as it is easy to miniaturize and is suited as an energy source for automobiles as well as domestic applications and portable devices; unlike the hydrogen used for some fuel cells, methanol is a liquid that is easy to store and transport in conventional tanks. The most important part of a fuel cell is its membrane. To date, the overall efficiency of a methanol fuel cell is reported to be about 20-25%. This low efficiency may be due to critical factors such as slow reaction kinetics at the anode and methanol crossover. The oxidation of methanol consists of a series of successive reactions creating formaldehyde and formic acid as intermediates, which contribute to slow reaction rates and decreased cell voltage. The investigation of new anode catalysts to improve oxidation reaction rates is currently an active area of research for the methanol fuel cell. Surprisingly, there are very limited reports on nanostructured membranes, which are rather simple to manufacture in different tunable compositions and are expected to allow permeation of protons but not methanol, owing to molecular-sizing effects and affinity to the membrane surface. We have developed a nanostructured fuel cell membrane from polydimethylsiloxane rubber (PDMS), ethylene methyl co-acrylate (EMA), and multi-walled carbon nanotubes (MWNTs), and studied the effect of incorporating different proportions of functionalized MWNTs (f-MWNTs) in the polymer membrane. The introduction of f-MWNTs into the polymer matrix modified the polymer structure and therefore the properties of the device. The proton conductivity, measured by an AC impedance technique using an open-frame, two-electrode cell, and the methanol permeability of the membranes were found to depend on the f-MWNT loading. The proton conductivity of the membranes increases with increasing f-MWNT concentration due to the increased content of conductive material. Methanol permeabilities measured at 60 °C also depended on the f-MWNT loading: the permeability decreased from 1.5 × 10⁻⁶ cm²/s for the pure film to 0.8 × 10⁻⁷ cm²/s for a membrane containing 0.5 wt% f-MWNTs, because with an increasing proportion of f-MWNTs the matrix becomes more compact. DSC melting curves show that the polymer matrix with f-MWNTs is thermally stable, FT-IR studies show good interaction between EMA and f-MWNTs, and XRD analysis shows good crystalline behavior of the prepared membranes. Significant cost savings can be achieved when using the blended films, which contain less expensive polymers.
Keywords: fuel cell membrane, polydimethyl siloxane rubber, carbon nanotubes, proton conductivity, methanol permeability
Procedia PDF Downloads 414
13141 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of highest interest for Aero-MIL and space applications. In any electronic device, Printed Circuit Board (PCB), providing interconnection between components, is a key for reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturers requirements and specifications, higher densities and better performances, faster time to market and longer lifetime, newer material and mixed buildups. From the very beginning of the PCB industry up to recently, qualification, experiments and trials, and errors were the most popular methods to assess system (PCB) reliability. Nowadays OEM, PCB manufacturers and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize precisely base materials (laminates, electrolytic copper, …), in order to understand failure mechanisms and simulate PCB aging under environmental constraints by means of finite element method for example. The laminates are woven composites and have thus an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated due to the thickness of the laminate (a few hundred of microns). It has to be noted that the knowledge of the out-of-plane properties is fundamental to investigate the lifetime of high density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate it. The methodology has been applied to one laminate used in hyperfrequency spatial applications in order to get its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next; numerical simulations of a plated through hole in a double sided PCB are performed. Results show the major importance of the out-of-plane properties and the temperature dependency of these properties on the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, support of CNES, Thales Alenia Space and Cimulec is acknowledged.Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
Procedia PDF Downloads 208
13140 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project
Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen
Abstract:
This article is a first step toward researching and outlining the structure of the geotechnical database for the geological survey of a power project; in this context, the database has been created for the Bac Ai pumped storage hydropower project. To provide a method for organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used, bringing optimal efficiency to the process of exploiting, using, and analyzing data in support of design work in power engineering consulting. Three-dimensional (3D) geotechnical models, covering stratigraphy, lithology, porosity, etc., are created from the survey data. For the Bac Ai pumped storage hydropower project, the 3D geotechnical model includes six closely stacked stratigraphic formations built with the Horizons method, whereas the engineering geological parameters are modeled by geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. Analysis of the three-dimensional model allows better visualization for volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping designers better identify and offer the most suitable design solutions for the project. The database ensures continuous updating and synchronization and enables 3D modeling of geological and topographic data to be integrated with the design data in line with building information modeling; it is also the base platform for BIM and GIS integration.
Keywords: database, engineering geology, 3D Model, RockWorks, Bac Ai pumped storage hydropower project
Procedia PDF Downloads 174
13139 Drone On-Time Obstacle Avoidance for Static and Dynamic Obstacles
Authors: Herath M. P. C. Jayaweera, Samer Hanoun
Abstract:
Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to achieve safe operation in any application domain. The level of challenge increases significantly on the obstacle avoidance technique when the drone is following a ground mobile entity (GME). This is mainly due to the change in direction and magnitude of the GME′s velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use, and potential to be adopted for three-dimensional dynamic environments. However, the existing force field obstacle avoidance techniques suffer many drawbacks, including their tendency to generate longer routes when the obstacles are sideways of the drone′s route, poor ability to find the shortest flyable path, propensity to fall into local minima, producing a non-smooth path, and high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for its obstacle-free and efficient path based on the shape of the encountered obstacles. This method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4-SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.Keywords: drones, force field methods, obstacle avoidance, path planning
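For context, the sketch below shows a classical 3-D force-field baseline of the kind the abstract says the proposed method improves upon: an attractive velocity toward the tracked ground mobile entity plus repulsive terms from sensed obstacles. The gains, influence radius, and speed cap are assumptions, and the paper's shape-aware velocity-waypoint logic is not reproduced.

```python
# Classical force-field baseline (not the paper's improved method): attraction toward the
# tracked GME plus repulsion from nearby obstacles. Gains and limits are assumptions.
import numpy as np

def velocity_command(drone, target, obstacles, k_att=1.0, k_rep=4.0,
                     influence=5.0, v_max=3.0):
    """Return a 3-D velocity setpoint (m/s) for the flight controller."""
    v = k_att * (target - drone)                      # attractive term toward the GME
    for obs in obstacles:
        diff = drone - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                      # repulsion only inside the influence radius
            v += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

drone = np.array([0.0, 0.0, 10.0])
gme = np.array([20.0, 5.0, 0.0])                      # ground mobile entity position
obstacles = [np.array([8.0, 2.0, 10.0])]
print("Velocity setpoint:", np.round(velocity_command(drone, gme, obstacles), 2))
```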
Procedia PDF Downloads 97
13138 Implementation Research on the Singapore Physical Activity and Nutrition Program: A Mixed-Method Evaluation
Authors: Elaine Wong
Abstract:
Introduction: The Singapore Physical Activity and Nutrition Study (SPANS) aimed to assess the effects of a community-based intervention on physical activity (PA) and nutrition behaviours as well as chronic disease risk factors for Singaporean women aged above 50 years. This article examines the participation, dose, fidelity, reach, satisfaction and reasons for completion and non-completion of the SPANS. Methods: The SPANS program integrated constructs of Social Cognitive Theory (SCT) and is composed of PA activities; nutrition workshops; dietary counselling coupled with motivational interviewing (MI) through phone calls; and text messages promoting healthy behaviours. Printed educational resources and health incentives were provided to participants. Data were collected via a mixed-method design strategy from a sample of 295 intervention participants. Quantitative data were collected using self-completed survey (n = 209); qualitative data were collected via research assistants’ notes, post feedback sessions and exit interviews with program completers (n = 13) and non-completers (n = 12). Results: Majority of participants reported high ‘satisfactory to excellent’ ratings for the program pace, suitability of interest and overall program (96.2-99.5%). Likewise, similar ratings for clarity of presentation; presentation skills, approachability, knowledge; and overall rating of trainers and program ambassadors were achieved (98.6-100%). Phone dietary counselling had the highest level of participation (72%) at less than or equal to 75% attendance rate followed by nutrition workshops (65%) and PA classes (60%). Attrition rate of the program was 19%; major reasons for withdrawal were personal commitments, relocation and health issues. All participants found the program resources to be colourful, informative and practical for their own reference. Reasons for program completion and maintenance were: desired health benefits; social bonding opportunities and to learn more about PA and nutrition. Conclusions: Process evaluation serves as an appropriate tool to identify recruitment challenges, effective intervention strategies and to ensure program fidelity. Program participants were satisfied with the educational resources, program components and delivery strategies implemented by the trainers and program ambassadors. The combination of printed materials and intervention components, when guided by the SCT and MI, were supportive in encouraging and reinforcing lifestyle behavioural changes. Mixed method evaluation approaches are integral processes to pinpoint barriers, motivators, improvements and effective program components in optimising the health status of Singaporean women.Keywords: process evaluation, Singapore, older adults, lifestyle changes, program challenges
Procedia PDF Downloads 127
13137 Measuring Biobased Content of Building Materials Using Carbon-14 Testing
Authors: Haley Gershon
Abstract:
The transition from using fossil fuel-based building material to formulating eco-friendly and biobased building materials plays a key role in sustainable building. The growing demand on a global level for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content that comprises a material’s ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Any fossil material older than 50,000 years will not contain any carbon-14 content. The radiocarbon method is thus used to determine the amount of carbon-14 content present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of material in solid, liquid, or gaseous form, which requires radiocarbon dating. Samples are combusted and converted into a solid graphite form and then pressed onto a metal disc and mounted onto a wheel of an accelerator mass spectrometer (AMS) machine for the analysis. The AMS instrument is used in order to count the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients coming from biomass sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products 100% sourced from plants, animals, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased. Any result in between 0% and 100% biobased indicates that there is a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, stating third-party verification and displaying a product’s percentage of biobased content. The USDA program includes a specific category for Building Materials. In order to qualify for the biobased certification under this product category, examples of product criteria that must be met include minimum 62% biobased content for wall coverings, minimum 25% biobased content for lumber, and a minimum 91% biobased content for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials
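Conceptually, the ASTM D6866 result converts the measured percent modern carbon (pMC) from the AMS measurement into percent biobased content by dividing by an atmospheric reference factor. The sketch below illustrates that arithmetic; the REF value drifts from year to year and the numbers used here are illustrative only, not the factor or data for any specific reporting year.

```python
# Conceptual sketch of the pMC-to-biobased-content conversion behind ASTM D6866 reporting.
# REF and the sample pMC below are illustrative values only.
def biobased_percent(pmc, ref=100.0):
    """Percent biobased carbon ~ 100 * pMC / REF, capped at 100%."""
    return min(100.0, 100.0 * pmc / ref)

sample_pmc = 64.0          # hypothetical AMS result for a wall-covering sample
print(f"Biobased content: {biobased_percent(sample_pmc):.0f}%")
# Against the USDA BioPreferred thresholds quoted above, 64% would meet the 62% wall-covering
# minimum but not the 91% floor-covering minimum.
```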
Procedia PDF Downloads 163
13136 Exploring Women's Needs Referring to Health Care Centers for Doing Pap Smear Test
Authors: Arezoo Fallahi, Fateme Aslibigi, Parvaneh Taymoori, Babak Nematshahrbabaki
Abstract:
Background and Aims: Cancer of the cervix, one of the leading causes of cancer-related death, is the second most common cancer in women worldwide. It develops over time, yet it is one of the most preventable types of cancer, and a suitable screening program is available for its prevention. Although the Pap smear test is vital for preventing and controlling the disease, women do not undergo it regularly. Therefore, this study aimed to explore the needs of women referring to health care centers for a Pap smear test. Material and Methods: In this study, an inductive qualitative method with a content analysis approach was used. The survey was conducted in 2014 in Varamin, a city located near the capital of Iran. Through purposive sampling, the viewpoints of 15 women referring to health care centers for a Pap smear test were explored. Inclusion criteria were: married women aged 20-50 years, with experience of the Pap smear test, who were willing to participate in the study. Recorded semi-structured interviews were transcribed and analysed using the content analysis method. To establish the trustworthiness and rigour of the data, the criteria of credibility, dependability, confirmability and transferability were used. Results: During data analysis, four main categories were developed: “role of the health care team”, “role of organizations”, “social support” and “policies and administration system”. The participants emphasised making motivational rules and coordination among organizations to support behaviours related to women’s health. Conclusion: The findings showed that doing the Pap smear test is attributed to appropriate and close interactions with health professionals, family support, encouraging legislation and policies, and coordination and communication among organizations. Therefore, designers and stakeholders of health policies and the health system should give more consideration to developing and involving other organizations in women’s health.
Keywords: qualitative approach, pap smear test, women, health care centers
Procedia PDF Downloads 500
13135 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s disease, Parkinson’s disease, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640 × 640 × 640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements learns iron concentrations in areas of interest directly and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
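A minimal PyTorch sketch of the kind of network described is given below. The channel counts, network depth, loss function and tensor shapes are assumptions chosen for illustration, not the authors’ architecture or training configuration; the point is only to show a 3D convolutional U-Net regressing a voxel-wise iron concentration map from multi-channel MRI inputs.

# Sketch of a small 3D U-Net for voxel-wise regression; all hyperparameters are illustrative.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with normalisation and ReLU, as used at each U-Net level.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_channels=4, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, kernel_size=1)  # voxel-wise iron concentration

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection from encoder level 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection from encoder level 1
        return self.head(d1)

# Training loop over synthetic (measurement, iron map) pairs; shapes and data are stand-ins.
model = UNet3D()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
measurements = torch.randn(2, 4, 32, 32, 32)   # simulated multi-channel MRI signals
iron_maps = torch.rand(2, 1, 32, 32, 32)       # ground-truth iron maps from the head model
for _ in range(3):
    optimiser.zero_grad()
    loss = loss_fn(model(measurements), iron_maps)
    loss.backward()
    optimiser.step()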
Procedia PDF Downloads 141
13134 Ethical, Legal and Societal Aspects of Unmanned Aircraft in Defence
Authors: Henning Lahmann, Benjamyn I. Scott, Bart Custers
Abstract:
Suboptimal adoption of AI in defence organisations carries risks for the protection of the freedom, safety, and security of society. Despite the vast opportunities that defence AI technology presents, there are also a variety of ethical, legal, and societal concerns. To ensure the successful use of AI technology by the military, ethical, legal, and societal aspects (ELSA) need to be considered, and the concerns they raise continuously addressed at all levels. This includes ELSA considerations during the design, manufacturing and maintenance of AI-based systems, as well as their utilisation via appropriate military doctrine and training. This raises the question of how defence organisations can remain strategically competitive and at the edge of military innovation while respecting the values of their citizens. This paper will explain the set-up and share preliminary results of a 4-year research project commissioned by the National Research Council in the Netherlands on the ethical, legal, and societal aspects of AI in defence. The project plans to develop a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain. To achieve this, the Lab shall devise a context-dependent methodology that focuses on the ‘analysis’, ‘design’ and ‘evaluation’ of ELSA of AI-based applications within the military context, which include, inter alia, unmanned aircraft. The Lab also recognises and complements existing methods regarding human-machine teaming, explainable algorithms, and value-sensitive design. Such methods will be modified for the military context and applied to pertinent case studies. These case studies include, among others, the application of autonomous (including semi-autonomous) robots and AI-based methods against cognitive warfare. As the perception of the application of AI in the military context, by both society and defence personnel, is important, the Lab will study how these perceptions evolve and vary in different contexts. Furthermore, the Lab will monitor developments in the global technological, military and societal spheres, as these may influence people’s perceptions. Although the emphasis of the research project is on different forms of AI in defence, it focuses on several case studies. One of these case studies is on unmanned aircraft, which will also be the focus of this paper. Hence, ethical, legal, and societal aspects of unmanned aircraft in the defence domain will be discussed in detail, including but not limited to privacy issues. Other typical issues concern security (for people, objects, data or other aircraft), privacy (sensitive data, hindrance, annoyance, data collection, function creep), chilling effects, the PlayStation mentality, and PTSD.
Keywords: autonomous weapon systems, unmanned aircraft, human-machine teaming, meaningful human control, value-sensitive design
Procedia PDF Downloads 95
13133 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes, considering several planning objectives simultaneously. Demand for healthcare services has recently increased dramatically. As the demand for healthcare services increases, so does the need for new healthcare buildings as well as for redesigning and renovating existing ones. The importance of implementing a standard set of engineering facilities planning and design techniques has already been proven in both the manufacturing and service industries, with many significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method yields a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at minimum relocation cost. It has been observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes
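To make the modelling idea concrete, the following is a minimal 0-1 goal programming sketch using the PuLP library. The departments, distances, patient flows, goal targets and priority weights are invented for illustration, and the model is deliberately simplified (distances from a single reference point rather than full inter-department flows); it is not the authors’ CPLEX model, only an example of binary assignment variables combined with penalised deviation variables for multiple goals.

# Illustrative 0-1 goal program: assign departments to locations while penalising
# deviations from walking-distance goals (normal and critical patients) and a relocation-cost goal.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

departments = ["triage", "xray", "lab"]
locations = ["A", "B", "C"]
dist = {"A": 10, "B": 25, "C": 40}                        # walking distance from the entrance (m)
flow_normal = {"triage": 50, "xray": 20, "lab": 10}        # visits per day, normal patients
flow_critical = {"triage": 30, "xray": 15, "lab": 5}       # visits per day, critical patients
current = {"triage": "C", "xray": "B", "lab": "A"}         # existing layout
reloc_cost = 100                                           # cost of moving one department
goal_normal, goal_critical, goal_cost = 1500, 900, 200     # aspiration levels (assumed)
weights = {"normal": 1.0, "critical": 2.0, "cost": 0.5}    # priority weights (assumed)

prob = LpProblem("ed_layout_goal_program", LpMinimize)
x = LpVariable.dicts("assign", (departments, locations), cat=LpBinary)
dev = {g: LpVariable(f"over_{g}", lowBound=0) for g in ("normal", "critical", "cost")}

# Each department occupies exactly one location, and each location hosts one department.
for d in departments:
    prob += lpSum(x[d][l] for l in locations) == 1
for l in locations:
    prob += lpSum(x[d][l] for d in departments) == 1

# Goal constraints: each total may exceed its target only by a penalised deviation variable.
prob += lpSum(flow_normal[d] * dist[l] * x[d][l] for d in departments for l in locations) <= goal_normal + dev["normal"]
prob += lpSum(flow_critical[d] * dist[l] * x[d][l] for d in departments for l in locations) <= goal_critical + dev["critical"]
prob += lpSum(reloc_cost * (1 - x[d][current[d]]) for d in departments) <= goal_cost + dev["cost"]

prob += lpSum(weights[g] * dev[g] for g in dev)  # objective: weighted sum of goal violations
prob.solve()
layout = {d: next(l for l in locations if x[d][l].value() == 1) for d in departments}
print(layout, {g: dev[g].value() for g in dev})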
Procedia PDF Downloads 301
13132 Estimation of Microbial-N Supply to Small Intestine in Angora Goats Fed by Different Roughage Sources
Authors: Nurcan Cetinkaya
Abstract:
The aim of the study was to estimate the microbial-N flow to the small intestine based on daily urinary excretion of purine derivatives (PD), mainly xanthine, hypoxanthine, uric acid and allantoin, in Angora goats fed grass hay and concentrate (Period I) or barley straw and concentrate (Period II). Daily urine samples were collected during the last 3 days of each period from 10 individually penned Angora bucks (LW 30-35 kg, 2-3 years old) receiving ad libitum grass hay or barley straw and 300 g/d concentrate. Fresh water was always available. 4N H2SO4 was added to the collected daily urine samples to keep the pH under 3 and avoid uric acid precipitation. Diluted urine samples were stored at -20°C until analysis. Urine samples were analyzed for xanthine, hypoxanthine, uric acid, allantoin and creatinine by high-performance liquid chromatography (HPLC). Urine was diluted 1:15 with water, and duplicate samples were prepared for HPLC analysis. Calculated mean levels (n=60) of urinary xanthine, hypoxanthine, uric acid, allantoin, total PD and creatinine excretion were 0.39±0.02, 0.26±0.03, 0.59±0.06, 5.91±0.50, 7.15±0.57 and 3.75±0.40 mmol/L, respectively, for Period I, and 0.35±0.03, 0.21±0.02, 0.55±0.05, 5.60±0.47, 6.71±0.46 and 3.73±0.41 mmol/L, respectively, for Period II. Mean values of Periods I and II were significantly different (P < 0.05) except for creatinine excretion. The estimated mean microbial-N supply to the small intestine for Periods I and II in Angora goats was 5.72±0.46 and 5.41±0.61 g N/d, respectively. The effects of grass hay and barley straw feeding on microbial-N supply to the small intestine were found to be significantly different (P < 0.05). In conclusion, grass hay showed a better effect on ruminal microbial protein synthesis compared to barley straw; therefore, grass hay is suggested as the roughage source in Angora goat feeding.
Keywords: angora goat, HPLC method, microbial-N supply to small intestine, urinary purine derivatives
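A rough sketch of how daily PD excretion can be converted into a microbial-N estimate is shown below. It follows the general form of the Chen and Gomes model developed for sheep, which is often applied to goats; the coefficients (0.84 purine recovery, the 0.150·W^0.75·e^(−0.25X) endogenous term, 70 mg N per mmol purine, 0.116 purine-N to total-N ratio, 0.83 digestibility) and the assumed daily urine volume are assumptions to be verified against the original reference, not values from this abstract.

# Hedged sketch: daily urinary PD excretion -> absorbed microbial purines -> microbial N (g/d).
import math

def absorbed_purines(pd_mmol_per_day: float, live_weight_kg: float) -> float:
    """Solve Y = 0.84*X + 0.150*W^0.75*exp(-0.25*X) for X by fixed-point iteration (assumed model)."""
    w075 = live_weight_kg ** 0.75
    x = pd_mmol_per_day  # starting guess
    for _ in range(50):
        x = (pd_mmol_per_day - 0.150 * w075 * math.exp(-0.25 * x)) / 0.84
    return max(x, 0.0)

def microbial_n_g_per_day(pd_mmol_per_l: float, urine_l_per_day: float, live_weight_kg: float) -> float:
    # Daily PD output = concentration x daily urine volume; 70/(0.116*0.83*1000) converts
    # absorbed purines (mmol/d) to microbial N (g/d) under the assumed constants.
    x = absorbed_purines(pd_mmol_per_l * urine_l_per_day, live_weight_kg)
    return x * 70.0 / (0.116 * 0.83 * 1000.0)

# Illustrative call: Period I total PD concentration with an assumed urine volume and mid-range LW.
print(microbial_n_g_per_day(pd_mmol_per_l=7.15, urine_l_per_day=1.1, live_weight_kg=32.5))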
Procedia PDF Downloads 228
13131 Monodisperse Quaternary Cobalt Chromium Ferrite Nanoparticles Synthesised from a Single Source Precursor
Authors: Khadijat O. Abdulwahab, Mohammad A. Malik, Paul O’Brien, Grigore A. Timco, Floriana Tuna
Abstract:
The synthesis of spinel ferrite nanoparticles with a narrow size distribution is crucial for their numerous applications, including information storage, hyperthermia treatment, drug delivery, contrast agents in magnetic resonance imaging, catalysis, sensors, and environmental remediation. Ferrites have the general formula MFe2O4 (M = Fe, Co, Mn, Ni, Zn, etc.) and possess remarkable electrical and magnetic properties, which depend on the cations, the method of preparation, the size and the site occupancies. To the best of our knowledge, there are no reports on the use of a single source precursor to synthesise quaternary ferrite nanoparticles. Herein, we demonstrate the use of the trimetallic iron pivalate cluster [CrCoFeO(O2CtBu)6(HO2CtBu)3] as a single source precursor to synthesise monodisperse cobalt chromium ferrite (FeCoCrO4) nanoparticles by the hot injection thermolysis method. The precursor was thermolysed in oleylamine and oleic acid, with diphenyl ether as the solvent, at its boiling point (260°C). The effect of concentration on the stoichiometry, phases and morphology of the nanoparticles was studied. The p-XRD patterns of the nanoparticles obtained at both concentrations matched cubic iron cobalt chromium ferrite (FeCoCrO4). TEM showed that more monodisperse spherical ferrite nanoparticles, with an average diameter of 4.0 ± 0.4 nm, were obtained at the higher precursor concentration. Magnetic measurements revealed that all the ferrite particles are superparamagnetic at room temperature. The nanoparticles were characterised by Powder X-ray Diffraction (p-XRD), Transmission Electron Microscopy (TEM), Inductively Coupled Plasma (ICP), Electron Probe Microanalysis (EPMA), Energy Dispersive Spectroscopy (EDS) and a Superconducting Quantum Interference Device (SQUID).
Keywords: quaternary ferrite nanoparticles, single source precursor, monodisperse, cobalt chromium ferrite, colloidal, hot injection thermolysis
Procedia PDF Downloads 278